2026-03-09T14:11:15.234 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-09T14:11:15.238 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-09T14:11:15.258 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/502
branch: squid
description: orch/cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} supported-container-hosts$/{ubuntu_22.04} workloads/cephadm_iscsi}
email: null
first_in_suite: false
flavor: default
job_id: '502'
ktype: distro
last_in_suite: false
machine_type: vps
name: kyr-2026-03-09_11:23:05-orch-squid-none-default-vps
no_nested_subset: false
openstack:
- machine:
    cpus: 1
    disk: 40
    ram: 8000
  volumes:
    count: 4
    size: 30
os_type: ubuntu
os_version: '22.04'
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      global:
        mon warn on pool no app: false
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - MON_DOWN
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  workunit:
    branch: tt-squid
    sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - host.a
  - mon.a
  - mgr.x
  - osd.0
  - osd.1
  - client.0
  - ceph.iscsi.iscsi.a
- - mon.b
  - osd.2
  - osd.3
  - osd.4
  - client.1
- - mon.c
  - osd.5
  - osd.6
  - osd.7
  - client.2
  - ceph.iscsi.iscsi.b
seed: 3443
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
targets:
  vm03.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPP8Lv6hfpjPzIUIyjU0K0sPFvVDEjkOw3yHpJA+O0R3OpsmK3D2/iQtgUeOS9XwGSQ/S7v+9iH3mEX6luAWPW4=
  vm04.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE+T5zxskuZrV9yYMuxtPB+llfWXaWbVL53RH5KbzGAuen0RO6DNYcpHLJWUcayt331C+Fi4GKeqnLAwBAqRjVo=
  vm05.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNSwFw2zLryqyylIiXoV2cyKg8PIcpZ7qt1AyHTt29Vfb29Sgixaxly4lbnCBJwV26LP7iNnnymHmwdpwE62xzA=
tasks:
- cephadm: null
- cephadm.shell:
    host.a:
    - ceph orch status
    - ceph orch ps
    - ceph orch ls
    - ceph orch host ls
    - ceph orch device ls
- install:
    extra_system_packages:
      deb:
      - open-iscsi
      - multipath-tools
      rpm:
      - iscsi-initiator-utils
      - device-mapper-multipath
- ceph_iscsi_client:
    clients:
    - client.1
- cram:
    clients:
      client.0:
      - src/test/cli-integration/rbd/gwcli_create.t
      client.1:
      - src/test/cli-integration/rbd/iscsi_client.t
      client.2:
      - src/test/cli-integration/rbd/gwcli_delete.t
    parallel: false
- cram:
    clients:
      client.0:
      - src/test/cli-integration/rbd/rest_api_create.t
      client.1:
      - src/test/cli-integration/rbd/iscsi_client.t
      client.2:
      - src/test/cli-integration/rbd/rest_api_delete.t
    parallel: false
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-09_11:23:05
tube: vps
use_shaman: true
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-09T14:11:15.258 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa; will attempt to use it
2026-03-09T14:11:15.258 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks
2026-03-09T14:11:15.258 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-09T14:11:15.259 INFO:teuthology.task.internal:Checking packages...
2026-03-09T14:11:15.259 INFO:teuthology.task.internal:Checking packages for os_type 'ubuntu', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-09T14:11:15.259 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-09T14:11:15.259 INFO:teuthology.packaging:ref: None
2026-03-09T14:11:15.259 INFO:teuthology.packaging:tag: None
2026-03-09T14:11:15.259 INFO:teuthology.packaging:branch: squid
2026-03-09T14:11:15.259 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T14:11:15.259 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=squid
2026-03-09T14:11:15.879 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678-ge911bdeb-1jammy
2026-03-09T14:11:15.880 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-09T14:11:15.881 INFO:teuthology.task.internal:no buildpackages task found
2026-03-09T14:11:15.881 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-09T14:11:15.881 INFO:teuthology.task.internal:Saving configuration
2026-03-09T14:11:15.885 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-09T14:11:15.886 INFO:teuthology.task.internal.check_lock:Checking locks...
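The internal.check_packages step above resolves the requested branch to ready builds through the shaman API before touching any nodes; if no ready build exists for the distro/flavor combination, the job fails early. Note the WARNING line: when more than one of ref, tag, branch, or sha1 is supplied, branch wins. A minimal sketch of the same lookup, assuming the requests package and reusing the query parameters from the URL logged above (the helper name is hypothetical, not teuthology's own code):

    import requests

    def find_ready_builds(ref, distro="ubuntu/22.04/x86_64", flavor="default"):
        # Mirrors the shaman query logged above; returns a list of build records.
        resp = requests.get(
            "https://shaman.ceph.com/api/search",
            params={
                "status": "ready",
                "project": "ceph",
                "flavor": flavor,
                "distros": distro,
                "ref": ref,
            },
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    for build in find_ready_builds("squid"):
        # 'sha1' and 'extra' are fields of the shaman records shown later in this log.
        print(build["sha1"], build["extra"]["package_manager_version"])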
2026-03-09T14:11:15.893 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm03.local', 'description': '/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/502', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-09 14:09:34.227477', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:03', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPP8Lv6hfpjPzIUIyjU0K0sPFvVDEjkOw3yHpJA+O0R3OpsmK3D2/iQtgUeOS9XwGSQ/S7v+9iH3mEX6luAWPW4='}
2026-03-09T14:11:15.898 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm04.local', 'description': '/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/502', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-09 14:09:34.226852', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:04', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE+T5zxskuZrV9yYMuxtPB+llfWXaWbVL53RH5KbzGAuen0RO6DNYcpHLJWUcayt331C+Fi4GKeqnLAwBAqRjVo='}
2026-03-09T14:11:15.903 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm05.local', 'description': '/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/502', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-09 14:09:34.227259', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:05', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNSwFw2zLryqyylIiXoV2cyKg8PIcpZ7qt1AyHTt29Vfb29Sgixaxly4lbnCBJwV26LP7iNnnymHmwdpwE62xzA='}
2026-03-09T14:11:15.903 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-09T14:11:15.904 INFO:teuthology.task.internal:roles: ubuntu@vm03.local - ['host.a', 'mon.a', 'mgr.x', 'osd.0', 'osd.1', 'client.0', 'ceph.iscsi.iscsi.a']
2026-03-09T14:11:15.904 INFO:teuthology.task.internal:roles: ubuntu@vm04.local - ['mon.b', 'osd.2', 'osd.3', 'osd.4', 'client.1']
2026-03-09T14:11:15.904 INFO:teuthology.task.internal:roles: ubuntu@vm05.local - ['mon.c', 'osd.5', 'osd.6', 'osd.7', 'client.2', 'ceph.iscsi.iscsi.b']
2026-03-09T14:11:15.904 INFO:teuthology.run_tasks:Running task console_log...
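check_lock proceeds only when every target is locked for this run; the status dicts above show locked: True and locked_by: 'kyr' on all three VMs. A sketch of that invariant as a check over those records (hypothetical helper; field names are the ones in the dicts above):

    def assert_locked(statuses, owner):
        # Each status dict is shaped like the lock-server records logged above.
        for st in statuses:
            if not st["locked"] or st["locked_by"] != owner:
                raise RuntimeError(
                    f"{st['name']}: locked={st['locked']} "
                    f"locked_by={st.get('locked_by')}, expected lock by {owner}"
                )

    statuses = [
        {"name": "vm03.local", "locked": True, "locked_by": "kyr"},
        {"name": "vm04.local", "locked": True, "locked_by": "kyr"},
        {"name": "vm05.local", "locked": True, "locked_by": "kyr"},
    ]
    assert_locked(statuses, "kyr")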
2026-03-09T14:11:15.910 DEBUG:teuthology.task.console_log:vm03 does not support IPMI; excluding
2026-03-09T14:11:15.915 DEBUG:teuthology.task.console_log:vm04 does not support IPMI; excluding
2026-03-09T14:11:15.920 DEBUG:teuthology.task.console_log:vm05 does not support IPMI; excluding
2026-03-09T14:11:15.921 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7f9a6c572170>, signals=[15])
2026-03-09T14:11:15.921 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-09T14:11:15.921 INFO:teuthology.task.internal:Opening connections...
2026-03-09T14:11:15.921 DEBUG:teuthology.task.internal:connecting to ubuntu@vm03.local
2026-03-09T14:11:15.922 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm03.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T14:11:15.981 DEBUG:teuthology.task.internal:connecting to ubuntu@vm04.local
2026-03-09T14:11:15.982 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm04.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T14:11:16.044 DEBUG:teuthology.task.internal:connecting to ubuntu@vm05.local
2026-03-09T14:11:16.045 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm05.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T14:11:16.106 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-09T14:11:16.108 DEBUG:teuthology.orchestra.run.vm03:> uname -m
2026-03-09T14:11:16.111 INFO:teuthology.orchestra.run.vm03.stdout:x86_64
2026-03-09T14:11:16.111 DEBUG:teuthology.orchestra.run.vm03:> cat /etc/os-release
2026-03-09T14:11:16.156 INFO:teuthology.orchestra.run.vm03.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-09T14:11:16.156 INFO:teuthology.orchestra.run.vm03.stdout:NAME="Ubuntu"
2026-03-09T14:11:16.156 INFO:teuthology.orchestra.run.vm03.stdout:VERSION_ID="22.04"
2026-03-09T14:11:16.156 INFO:teuthology.orchestra.run.vm03.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-09T14:11:16.156 INFO:teuthology.orchestra.run.vm03.stdout:VERSION_CODENAME=jammy
2026-03-09T14:11:16.156 INFO:teuthology.orchestra.run.vm03.stdout:ID=ubuntu
2026-03-09T14:11:16.156 INFO:teuthology.orchestra.run.vm03.stdout:ID_LIKE=debian
2026-03-09T14:11:16.156 INFO:teuthology.orchestra.run.vm03.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-09T14:11:16.156 INFO:teuthology.orchestra.run.vm03.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-09T14:11:16.156 INFO:teuthology.orchestra.run.vm03.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-09T14:11:16.156 INFO:teuthology.orchestra.run.vm03.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-09T14:11:16.156 INFO:teuthology.orchestra.run.vm03.stdout:UBUNTU_CODENAME=jammy
2026-03-09T14:11:16.156 INFO:teuthology.lock.ops:Updating vm03.local on lock server
2026-03-09T14:11:16.161 DEBUG:teuthology.orchestra.run.vm04:> uname -m
2026-03-09T14:11:16.163 INFO:teuthology.orchestra.run.vm04.stdout:x86_64
2026-03-09T14:11:16.164 DEBUG:teuthology.orchestra.run.vm04:> cat /etc/os-release
2026-03-09T14:11:16.209 INFO:teuthology.orchestra.run.vm04.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-09T14:11:16.209 INFO:teuthology.orchestra.run.vm04.stdout:NAME="Ubuntu"
2026-03-09T14:11:16.209 INFO:teuthology.orchestra.run.vm04.stdout:VERSION_ID="22.04"
2026-03-09T14:11:16.209 INFO:teuthology.orchestra.run.vm04.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-09T14:11:16.209 INFO:teuthology.orchestra.run.vm04.stdout:VERSION_CODENAME=jammy
2026-03-09T14:11:16.209 INFO:teuthology.orchestra.run.vm04.stdout:ID=ubuntu
2026-03-09T14:11:16.209 INFO:teuthology.orchestra.run.vm04.stdout:ID_LIKE=debian
2026-03-09T14:11:16.209 INFO:teuthology.orchestra.run.vm04.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-09T14:11:16.209 INFO:teuthology.orchestra.run.vm04.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-09T14:11:16.209 INFO:teuthology.orchestra.run.vm04.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-09T14:11:16.209 INFO:teuthology.orchestra.run.vm04.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-09T14:11:16.209 INFO:teuthology.orchestra.run.vm04.stdout:UBUNTU_CODENAME=jammy
2026-03-09T14:11:16.209 INFO:teuthology.lock.ops:Updating vm04.local on lock server
2026-03-09T14:11:16.214 DEBUG:teuthology.orchestra.run.vm05:> uname -m
2026-03-09T14:11:16.217 INFO:teuthology.orchestra.run.vm05.stdout:x86_64
2026-03-09T14:11:16.217 DEBUG:teuthology.orchestra.run.vm05:> cat /etc/os-release
2026-03-09T14:11:16.264 INFO:teuthology.orchestra.run.vm05.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-09T14:11:16.264 INFO:teuthology.orchestra.run.vm05.stdout:NAME="Ubuntu"
2026-03-09T14:11:16.264 INFO:teuthology.orchestra.run.vm05.stdout:VERSION_ID="22.04"
2026-03-09T14:11:16.264 INFO:teuthology.orchestra.run.vm05.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-09T14:11:16.264 INFO:teuthology.orchestra.run.vm05.stdout:VERSION_CODENAME=jammy
2026-03-09T14:11:16.264 INFO:teuthology.orchestra.run.vm05.stdout:ID=ubuntu
2026-03-09T14:11:16.264 INFO:teuthology.orchestra.run.vm05.stdout:ID_LIKE=debian
2026-03-09T14:11:16.264 INFO:teuthology.orchestra.run.vm05.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-09T14:11:16.264 INFO:teuthology.orchestra.run.vm05.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-09T14:11:16.264 INFO:teuthology.orchestra.run.vm05.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-09T14:11:16.264 INFO:teuthology.orchestra.run.vm05.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-09T14:11:16.264 INFO:teuthology.orchestra.run.vm05.stdout:UBUNTU_CODENAME=jammy
2026-03-09T14:11:16.264 INFO:teuthology.lock.ops:Updating vm05.local on lock server
2026-03-09T14:11:16.268 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-09T14:11:16.270 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-09T14:11:16.271 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-09T14:11:16.271 DEBUG:teuthology.orchestra.run.vm03:> test '!' -e /home/ubuntu/cephtest
2026-03-09T14:11:16.272 DEBUG:teuthology.orchestra.run.vm04:> test '!' -e /home/ubuntu/cephtest
2026-03-09T14:11:16.273 DEBUG:teuthology.orchestra.run.vm05:> test '!' -e /home/ubuntu/cephtest
2026-03-09T14:11:16.307 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-09T14:11:16.308 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
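internal.push_inventory derives each node's inventory from `uname -m` and `/etc/os-release`, then pushes it to the lock server. A sketch of parsing os-release output like the dumps above (standard KEY=VALUE lines, values optionally double-quoted):

    def parse_os_release(text):
        # /etc/os-release is KEY=VALUE per line; quotes around values are optional.
        info = {}
        for line in text.splitlines():
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            info[key] = value.strip('"')
        return info

    with open("/etc/os-release") as f:
        info = parse_os_release(f.read())
    print(info["ID"], info["VERSION_ID"])  # ubuntu 22.04 on these nodes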
2026-03-09T14:11:16.308 DEBUG:teuthology.orchestra.run.vm03:> test -z $(ls -A /var/lib/ceph)
2026-03-09T14:11:16.318 DEBUG:teuthology.orchestra.run.vm04:> test -z $(ls -A /var/lib/ceph)
2026-03-09T14:11:16.319 DEBUG:teuthology.orchestra.run.vm05:> test -z $(ls -A /var/lib/ceph)
2026-03-09T14:11:16.320 INFO:teuthology.orchestra.run.vm03.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-09T14:11:16.321 INFO:teuthology.orchestra.run.vm04.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-09T14:11:16.352 INFO:teuthology.orchestra.run.vm05.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-09T14:11:16.352 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-09T14:11:16.360 DEBUG:teuthology.orchestra.run.vm03:> test -e /ceph-qa-ready
2026-03-09T14:11:16.363 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T14:11:16.591 DEBUG:teuthology.orchestra.run.vm04:> test -e /ceph-qa-ready
2026-03-09T14:11:16.593 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T14:11:16.836 DEBUG:teuthology.orchestra.run.vm05:> test -e /ceph-qa-ready
2026-03-09T14:11:16.838 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T14:11:17.066 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-09T14:11:17.067 INFO:teuthology.task.internal:Creating test directory...
2026-03-09T14:11:17.067 DEBUG:teuthology.orchestra.run.vm03:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-09T14:11:17.068 DEBUG:teuthology.orchestra.run.vm04:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-09T14:11:17.069 DEBUG:teuthology.orchestra.run.vm05:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-09T14:11:17.073 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-09T14:11:17.074 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-09T14:11:17.075 INFO:teuthology.task.internal:Creating archive directory...
2026-03-09T14:11:17.076 DEBUG:teuthology.orchestra.run.vm03:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-09T14:11:17.114 DEBUG:teuthology.orchestra.run.vm04:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-09T14:11:17.115 DEBUG:teuthology.orchestra.run.vm05:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-09T14:11:17.125 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-09T14:11:17.127 INFO:teuthology.task.internal:Enabling coredump saving...
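The `test -z $(ls -A /var/lib/ceph)` check above passes both when the directory is empty and when it does not exist: `ls` prints nothing to stdout in either case, and its stderr (the "cannot access" lines) does not enter the command substitution. The same predicate written out in Python, for comparison:

    import os

    def dir_absent_or_empty(path):
        # Equivalent to `test -z $(ls -A path)`: only a populated directory fails.
        try:
            return len(os.listdir(path)) == 0
        except FileNotFoundError:
            return True

    assert dir_absent_or_empty("/var/lib/ceph")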
2026-03-09T14:11:17.127 DEBUG:teuthology.orchestra.run.vm03:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-09T14:11:17.163 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T14:11:17.163 DEBUG:teuthology.orchestra.run.vm04:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-09T14:11:17.166 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T14:11:17.166 DEBUG:teuthology.orchestra.run.vm05:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-09T14:11:17.169 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T14:11:17.169 DEBUG:teuthology.orchestra.run.vm03:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-09T14:11:17.206 DEBUG:teuthology.orchestra.run.vm04:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-09T14:11:17.211 DEBUG:teuthology.orchestra.run.vm05:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-09T14:11:17.213 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T14:11:17.218 INFO:teuthology.orchestra.run.vm04.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T14:11:17.219 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T14:11:17.220 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T14:11:17.223 INFO:teuthology.orchestra.run.vm04.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T14:11:17.225 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T14:11:17.226 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-09T14:11:17.228 INFO:teuthology.task.internal:Configuring sudo...
2026-03-09T14:11:17.228 DEBUG:teuthology.orchestra.run.vm03:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-09T14:11:17.262 DEBUG:teuthology.orchestra.run.vm04:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-09T14:11:17.267 DEBUG:teuthology.orchestra.run.vm05:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-09T14:11:17.275 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-09T14:11:17.277 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
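In the kernel.core_pattern template written above, %t expands to the time of the dump (epoch seconds) and %p to the PID of the crashing process, so cores land in the archive under names like 1772981234.12345.core. A sketch of applying and persisting the same setting from Python (assumes passwordless sudo, as on these test nodes):

    import subprocess

    PATTERN = "/home/ubuntu/cephtest/archive/coredump/%t.%p.core"

    # Set the pattern for the running kernel, then append it to /etc/sysctl.conf
    # so it survives a reboot -- the same two steps as the logged pipeline.
    subprocess.run(["sudo", "sysctl", "-w", f"kernel.core_pattern={PATTERN}"], check=True)
    subprocess.run(
        ["sudo", "tee", "-a", "/etc/sysctl.conf"],
        input=f"kernel.core_pattern={PATTERN}\n",
        text=True,
        check=True,
    )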
2026-03-09T14:11:17.277 DEBUG:teuthology.orchestra.run.vm03:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-09T14:11:17.314 DEBUG:teuthology.orchestra.run.vm04:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-09T14:11:17.319 DEBUG:teuthology.orchestra.run.vm05:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-09T14:11:17.321 DEBUG:teuthology.orchestra.run.vm03:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T14:11:17.360 DEBUG:teuthology.orchestra.run.vm03:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T14:11:17.404 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-09T14:11:17.404 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-09T14:11:17.453 DEBUG:teuthology.orchestra.run.vm04:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T14:11:17.456 DEBUG:teuthology.orchestra.run.vm04:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T14:11:17.501 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-09T14:11:17.501 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-09T14:11:17.550 DEBUG:teuthology.orchestra.run.vm05:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T14:11:17.552 DEBUG:teuthology.orchestra.run.vm05:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T14:11:17.599 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-09T14:11:17.599 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-09T14:11:17.646 DEBUG:teuthology.orchestra.run.vm03:> sudo service rsyslog restart
2026-03-09T14:11:17.647 DEBUG:teuthology.orchestra.run.vm04:> sudo service rsyslog restart
2026-03-09T14:11:17.648 DEBUG:teuthology.orchestra.run.vm05:> sudo service rsyslog restart
2026-03-09T14:11:17.704 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-09T14:11:17.705 INFO:teuthology.task.internal:Starting timer...
2026-03-09T14:11:17.705 INFO:teuthology.run_tasks:Running task pcp...
2026-03-09T14:11:17.708 INFO:teuthology.run_tasks:Running task selinux...
2026-03-09T14:11:17.711 INFO:teuthology.task.selinux:Excluding vm03: VMs are not yet supported
2026-03-09T14:11:17.711 INFO:teuthology.task.selinux:Excluding vm04: VMs are not yet supported
2026-03-09T14:11:17.711 INFO:teuthology.task.selinux:Excluding vm05: VMs are not yet supported
2026-03-09T14:11:17.711 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-09T14:11:17.711 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-09T14:11:17.711 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-09T14:11:17.711 INFO:teuthology.run_tasks:Running task ansible.cephlab...
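The syslog task above creates /etc/rsyslog.d/80-cephtest.conf by piping the file body into `sudo dd of=...`, a common idiom for writing a root-owned file without a root shell redirection (the body is sent on stdin and is not echoed in this log). The same idiom from Python, with a placeholder body since the real rules are not shown here:

    import subprocess

    def write_root_file(path, body):
        # `dd of=path` reads the new contents from stdin; sudo elevates only dd,
        # not a whole shell, matching the logged command.
        subprocess.run(["sudo", "dd", f"of={path}"], input=body.encode(), check=True)

    write_root_file("/etc/rsyslog.d/80-cephtest.conf", "# rules omitted in the log\n")
    subprocess.run(["sudo", "service", "rsyslog", "restart"], check=True)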
2026-03-09T14:11:17.712 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-09T14:11:17.712 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-09T14:11:17.714 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-09T14:11:18.197 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-09T14:11:18.202 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-09T14:11:18.203 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventoryctwye9ue --limit vm03.local,vm04.local,vm05.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-09T14:13:40.419 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm03.local'), Remote(name='ubuntu@vm04.local'), Remote(name='ubuntu@vm05.local')]
2026-03-09T14:13:40.419 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm03.local'
2026-03-09T14:13:40.420 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm03.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T14:13:40.477 DEBUG:teuthology.orchestra.run.vm03:> true
2026-03-09T14:13:40.668 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm03.local'
2026-03-09T14:13:40.668 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm04.local'
2026-03-09T14:13:40.668 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm04.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T14:13:40.726 DEBUG:teuthology.orchestra.run.vm04:> true
2026-03-09T14:13:40.916 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm04.local'
2026-03-09T14:13:40.916 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm05.local'
2026-03-09T14:13:40.916 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm05.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T14:13:40.974 DEBUG:teuthology.orchestra.run.vm05:> true
2026-03-09T14:13:41.164 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm05.local'
2026-03-09T14:13:41.164 INFO:teuthology.run_tasks:Running task clock...
2026-03-09T14:13:41.166 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
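The ansible.cephlab task renders the playbook list above to a generated file and shells out to ansible-playbook with JSON extra-vars, a temporary inventory, a host limit, and the skip tags from the overrides. A sketch of assembling that command line (hypothetical wrapper; the inventory path and abbreviated tag list are stand-ins for the generated values logged above):

    import json
    import shlex

    def ansible_cmd(playbook, inventory, hosts, extra_vars, skip_tags):
        # Same flag layout as the logged ansible-playbook invocation.
        return [
            "ansible-playbook", "-v",
            "--extra-vars", json.dumps(extra_vars),
            "-i", inventory,
            "--limit", ",".join(hosts),
            playbook,
            "--skip-tags", ",".join(skip_tags),
        ]

    cmd = ansible_cmd(
        "/home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml",
        "/tmp/teuth_ansible_inventory",  # stand-in for the generated temp file
        ["vm03.local", "vm04.local", "vm05.local"],
        {"ansible_ssh_user": "ubuntu", "timezone": "UTC"},
        ["nagios", "monitoring-scripts", "hostname"],  # abbreviated tag list
    )
    print(shlex.join(cmd))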
2026-03-09T14:13:41.166 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-09T14:13:41.167 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T14:13:41.168 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-09T14:13:41.168 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T14:13:41.169 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-09T14:13:41.169 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T14:13:41.183 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:41 ntpd[16126]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-09T14:13:41.184 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:41 ntpd[16126]: Command line: ntpd -gq
2026-03-09T14:13:41.184 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:41 ntpd[16126]: ----------------------------------------------------
2026-03-09T14:13:41.184 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:41 ntpd[16126]: ntp-4 is maintained by Network Time Foundation,
2026-03-09T14:13:41.184 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:41 ntpd[16126]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-09T14:13:41.184 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:41 ntpd[16126]: corporation. Support and training for ntp-4 are
2026-03-09T14:13:41.184 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:41 ntpd[16126]: available at https://www.nwtime.org/support
2026-03-09T14:13:41.184 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:41 ntpd[16126]: ----------------------------------------------------
2026-03-09T14:13:41.184 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:41 ntpd[16126]: proto: precision = 0.030 usec (-25)
2026-03-09T14:13:41.185 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:41 ntpd[16126]: basedate set to 2022-02-04
2026-03-09T14:13:41.185 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:41 ntpd[16126]: gps base set to 2022-02-06 (week 2196)
2026-03-09T14:13:41.185 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:41 ntpd[16126]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-09T14:13:41.185 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:41 ntpd[16126]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-09T14:13:41.185 INFO:teuthology.orchestra.run.vm03.stderr: 9 Mar 14:13:41 ntpd[16126]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 72 days ago
2026-03-09T14:13:41.186 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:41 ntpd[16111]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-09T14:13:41.186 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:41 ntpd[16111]: Command line: ntpd -gq
2026-03-09T14:13:41.186 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:41 ntpd[16111]: ----------------------------------------------------
2026-03-09T14:13:41.186 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:41 ntpd[16111]: ntp-4 is maintained by Network Time Foundation,
2026-03-09T14:13:41.186 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:41 ntpd[16111]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-09T14:13:41.186 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:41 ntpd[16111]: corporation. Support and training for ntp-4 are
2026-03-09T14:13:41.186 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:41 ntpd[16111]: available at https://www.nwtime.org/support
2026-03-09T14:13:41.186 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:41 ntpd[16111]: ----------------------------------------------------
2026-03-09T14:13:41.186 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:41 ntpd[16111]: proto: precision = 0.029 usec (-25)
2026-03-09T14:13:41.186 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:41 ntpd[16111]: basedate set to 2022-02-04
2026-03-09T14:13:41.186 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:41 ntpd[16111]: gps base set to 2022-02-06 (week 2196)
2026-03-09T14:13:41.186 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:41 ntpd[16111]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-09T14:13:41.186 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:41 ntpd[16111]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-09T14:13:41.186 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:41 ntpd[16111]: Listen and drop on 0 v6wildcard [::]:123
2026-03-09T14:13:41.186 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:41 ntpd[16111]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-09T14:13:41.186 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:41 ntpd[16111]: Listen normally on 2 lo 127.0.0.1:123
2026-03-09T14:13:41.186 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:41 ntpd[16111]: Listen normally on 3 ens3 192.168.123.104:123
2026-03-09T14:13:41.186 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:41 ntpd[16111]: Listen normally on 4 lo [::1]:123
2026-03-09T14:13:41.186 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:41 ntpd[16111]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:4%2]:123
2026-03-09T14:13:41.186 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:41 ntpd[16111]: Listening on routing socket on fd #22 for interface updates
2026-03-09T14:13:41.186 INFO:teuthology.orchestra.run.vm04.stderr: 9 Mar 14:13:41 ntpd[16111]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 72 days ago
2026-03-09T14:13:41.187 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:41 ntpd[16126]: Listen and drop on 0 v6wildcard [::]:123
2026-03-09T14:13:41.187 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:41 ntpd[16126]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-09T14:13:41.187 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:41 ntpd[16126]: Listen normally on 2 lo 127.0.0.1:123
2026-03-09T14:13:41.187 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:41 ntpd[16126]: Listen normally on 3 ens3 192.168.123.103:123
2026-03-09T14:13:41.187 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:41 ntpd[16126]: Listen normally on 4 lo [::1]:123
2026-03-09T14:13:41.187 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:41 ntpd[16126]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:3%2]:123
2026-03-09T14:13:41.187 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:41 ntpd[16126]: Listening on routing socket on fd #22 for interface updates
2026-03-09T14:13:41.219 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:41 ntpd[16171]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-09T14:13:41.219 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:41 ntpd[16171]: Command line: ntpd -gq
2026-03-09T14:13:41.219 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:41 ntpd[16171]: ----------------------------------------------------
2026-03-09T14:13:41.219 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:41 ntpd[16171]: ntp-4 is maintained by Network Time Foundation,
2026-03-09T14:13:41.219 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:41 ntpd[16171]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-09T14:13:41.219 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:41 ntpd[16171]: corporation. Support and training for ntp-4 are
2026-03-09T14:13:41.219 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:41 ntpd[16171]: available at https://www.nwtime.org/support
2026-03-09T14:13:41.219 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:41 ntpd[16171]: ----------------------------------------------------
2026-03-09T14:13:41.220 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:41 ntpd[16171]: proto: precision = 0.029 usec (-25)
2026-03-09T14:13:41.220 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:41 ntpd[16171]: basedate set to 2022-02-04
2026-03-09T14:13:41.220 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:41 ntpd[16171]: gps base set to 2022-02-06 (week 2196)
2026-03-09T14:13:41.220 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:41 ntpd[16171]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-09T14:13:41.220 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:41 ntpd[16171]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-09T14:13:41.220 INFO:teuthology.orchestra.run.vm05.stderr: 9 Mar 14:13:41 ntpd[16171]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 72 days ago
2026-03-09T14:13:41.221 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:41 ntpd[16171]: Listen and drop on 0 v6wildcard [::]:123
2026-03-09T14:13:41.221 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:41 ntpd[16171]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-09T14:13:41.221 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:41 ntpd[16171]: Listen normally on 2 lo 127.0.0.1:123
2026-03-09T14:13:41.221 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:41 ntpd[16171]: Listen normally on 3 ens3 192.168.123.105:123
2026-03-09T14:13:41.222 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:41 ntpd[16171]: Listen normally on 4 lo [::1]:123
2026-03-09T14:13:41.222 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:41 ntpd[16171]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:5%2]:123
2026-03-09T14:13:41.222 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:41 ntpd[16171]: Listening on routing socket on fd #22 for interface updates
2026-03-09T14:13:42.185 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:42 ntpd[16111]: Soliciting pool server 94.16.122.152
2026-03-09T14:13:42.185 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:42 ntpd[16126]: Soliciting pool server 94.16.122.152
2026-03-09T14:13:42.221 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:42 ntpd[16171]: Soliciting pool server 172.104.134.72
2026-03-09T14:13:43.184 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:43 ntpd[16126]: Soliciting pool server 202.61.195.221
2026-03-09T14:13:43.184 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:43 ntpd[16111]: Soliciting pool server 202.61.195.221
2026-03-09T14:13:43.185 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:43 ntpd[16126]: Soliciting pool server 85.220.190.246
2026-03-09T14:13:43.185 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:43 ntpd[16111]: Soliciting pool server 85.220.190.246
2026-03-09T14:13:43.220 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:43 ntpd[16171]: Soliciting pool server 94.16.122.152
2026-03-09T14:13:43.220 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:43 ntpd[16171]: Soliciting pool server 139.162.156.95
2026-03-09T14:13:44.183 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:44 ntpd[16126]: Soliciting pool server 212.132.75.208
2026-03-09T14:13:44.184 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:44 ntpd[16126]: Soliciting pool server 185.252.140.125
2026-03-09T14:13:44.184 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:44 ntpd[16126]: Soliciting pool server 185.197.135.6
2026-03-09T14:13:44.185 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:44 ntpd[16111]: Soliciting pool server 212.132.75.208
2026-03-09T14:13:44.185 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:44 ntpd[16111]: Soliciting pool server 185.252.140.125
2026-03-09T14:13:44.185 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:44 ntpd[16111]: Soliciting pool server 185.197.135.6
2026-03-09T14:13:44.220 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:44 ntpd[16171]: Soliciting pool server 85.220.190.246
2026-03-09T14:13:44.220 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:44 ntpd[16171]: Soliciting pool server 202.61.195.221
2026-03-09T14:13:44.220 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:44 ntpd[16171]: Soliciting pool server 142.132.200.241
2026-03-09T14:13:45.183 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:45 ntpd[16126]: Soliciting pool server 185.13.148.71
2026-03-09T14:13:45.183 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:45 ntpd[16126]: Soliciting pool server 139.162.152.20
2026-03-09T14:13:45.183 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:45 ntpd[16126]: Soliciting pool server 172.104.134.72
2026-03-09T14:13:45.185 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:45 ntpd[16111]: Soliciting pool server 185.13.148.71
2026-03-09T14:13:45.185 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:45 ntpd[16111]: Soliciting pool server 139.162.152.20
2026-03-09T14:13:45.185 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:45 ntpd[16111]: Soliciting pool server 172.104.134.72
2026-03-09T14:13:45.220 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:45 ntpd[16171]: Soliciting pool server 185.197.135.6
2026-03-09T14:13:45.220 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:45 ntpd[16171]: Soliciting pool server 212.132.75.208
2026-03-09T14:13:45.220 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:45 ntpd[16171]: Soliciting pool server 185.252.140.125
2026-03-09T14:13:45.344 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:45 ntpd[16111]: Soliciting pool server 139.162.187.236
2026-03-09T14:13:45.345 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:45 ntpd[16171]: Soliciting pool server 139.162.187.236
2026-03-09T14:13:45.345 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:45 ntpd[16126]: Soliciting pool server 139.162.187.236
2026-03-09T14:13:46.182 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:46 ntpd[16126]: Soliciting pool server 116.203.218.109
2026-03-09T14:13:46.183 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:46 ntpd[16126]: Soliciting pool server 94.130.35.4
2026-03-09T14:13:46.183 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:46 ntpd[16126]: Soliciting pool server 139.162.156.95
2026-03-09T14:13:46.183 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:46 ntpd[16126]: Soliciting pool server 185.125.190.58
2026-03-09T14:13:46.185 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:46 ntpd[16111]: Soliciting pool server 116.203.218.109
2026-03-09T14:13:46.185 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:46 ntpd[16111]: Soliciting pool server 94.130.35.4
2026-03-09T14:13:46.185 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:46 ntpd[16111]: Soliciting pool server 139.162.156.95
2026-03-09T14:13:46.185 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:46 ntpd[16111]: Soliciting pool server 185.125.190.58
2026-03-09T14:13:46.220 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:46 ntpd[16171]: Soliciting pool server 116.203.218.109
2026-03-09T14:13:46.220 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:46 ntpd[16171]: Soliciting pool server 185.13.148.71
2026-03-09T14:13:46.220 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:46 ntpd[16171]: Soliciting pool server 139.162.152.20
2026-03-09T14:13:46.220 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:46 ntpd[16171]: Soliciting pool server 185.125.190.57
2026-03-09T14:13:47.182 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:47 ntpd[16126]: Soliciting pool server 91.189.91.157
2026-03-09T14:13:47.182 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:47 ntpd[16126]: Soliciting pool server 46.41.21.10
2026-03-09T14:13:47.182 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:47 ntpd[16126]: Soliciting pool server 142.132.200.241
2026-03-09T14:13:47.185 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:47 ntpd[16111]: Soliciting pool server 91.189.91.157
2026-03-09T14:13:47.185 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:47 ntpd[16111]: Soliciting pool server 46.41.21.10
2026-03-09T14:13:47.185 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:47 ntpd[16111]: Soliciting pool server 142.132.200.241
2026-03-09T14:13:47.220 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:47 ntpd[16171]: Soliciting pool server 185.125.190.58
2026-03-09T14:13:47.220 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:47 ntpd[16171]: Soliciting pool server 46.41.21.10
2026-03-09T14:13:47.220 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:47 ntpd[16171]: Soliciting pool server 94.130.35.4
2026-03-09T14:13:49.210 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 14:13:49 ntpd[16126]: ntpd: time slew +0.000254 s
2026-03-09T14:13:49.210 INFO:teuthology.orchestra.run.vm03.stdout:ntpd: time slew +0.000254s
2026-03-09T14:13:49.229 INFO:teuthology.orchestra.run.vm03.stdout: remote refid st t when poll reach delay offset jitter
2026-03-09T14:13:49.229 INFO:teuthology.orchestra.run.vm03.stdout:==============================================================================
2026-03-09T14:13:49.229 INFO:teuthology.orchestra.run.vm03.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:13:49.229 INFO:teuthology.orchestra.run.vm03.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:13:49.229 INFO:teuthology.orchestra.run.vm03.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:13:49.229 INFO:teuthology.orchestra.run.vm03.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:13:49.229 INFO:teuthology.orchestra.run.vm03.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:13:51.210 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 14:13:51 ntpd[16111]: ntpd: time slew +0.000321 s
2026-03-09T14:13:51.210 INFO:teuthology.orchestra.run.vm04.stdout:ntpd: time slew +0.000321s
2026-03-09T14:13:51.227 INFO:teuthology.orchestra.run.vm04.stdout: remote refid st t when poll reach delay offset jitter
2026-03-09T14:13:51.227 INFO:teuthology.orchestra.run.vm04.stdout:==============================================================================
2026-03-09T14:13:51.227 INFO:teuthology.orchestra.run.vm04.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:13:51.227 INFO:teuthology.orchestra.run.vm04.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:13:51.227 INFO:teuthology.orchestra.run.vm04.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:13:51.227 INFO:teuthology.orchestra.run.vm04.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:13:51.227 INFO:teuthology.orchestra.run.vm04.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:13:52.243 INFO:teuthology.orchestra.run.vm05.stdout: 9 Mar 14:13:52 ntpd[16171]: ntpd: time slew -0.007596 s
2026-03-09T14:13:52.243 INFO:teuthology.orchestra.run.vm05.stdout:ntpd: time slew -0.007596s
2026-03-09T14:13:52.264 INFO:teuthology.orchestra.run.vm05.stdout: remote refid st t when poll reach delay offset jitter
2026-03-09T14:13:52.264 INFO:teuthology.orchestra.run.vm05.stdout:==============================================================================
2026-03-09T14:13:52.264 INFO:teuthology.orchestra.run.vm05.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:13:52.264 INFO:teuthology.orchestra.run.vm05.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:13:52.264 INFO:teuthology.orchestra.run.vm05.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:13:52.264 INFO:teuthology.orchestra.run.vm05.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:13:52.264 INFO:teuthology.orchestra.run.vm05.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:13:52.265 INFO:teuthology.run_tasks:Running task cephadm...
2026-03-09T14:13:52.308 INFO:tasks.cephadm:Config: {'conf': {'global': {'mon warn on pool no app': False}, 'mgr': {'debug mgr': 20, 'debug ms': 1}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'MON_DOWN'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}
2026-03-09T14:13:52.308 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T14:13:52.308 INFO:tasks.cephadm:Cluster fsid is 3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:13:52.308 INFO:tasks.cephadm:Choosing monitor IPs and ports...
2026-03-09T14:13:52.308 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '192.168.123.103', 'mon.b': '192.168.123.104', 'mon.c': '192.168.123.105'}
2026-03-09T14:13:52.308 INFO:tasks.cephadm:First mon is mon.a on vm03
2026-03-09T14:13:52.308 INFO:tasks.cephadm:First mgr is x
2026-03-09T14:13:52.308 INFO:tasks.cephadm:Normalizing hostnames...
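The clock task's one-liner above is a portability shim: stop whichever time daemon is installed (ntp, ntpd, or chronyd), force a one-shot correction (`ntpd -gq`, falling back to `chronyc makestep`), restart the daemon, and dump peer state with `ntpq -p` or `chronyc sources`. In the `ntpq -p` tables, stratum 16 and refid .POOL. just mean the pool entries have not yet resolved to concrete peers; the measured corrections are the "time slew" lines, all well under ten milliseconds. The try-alternatives-in-order pattern, sketched in Python:

    import subprocess

    def first_success(*commands):
        # Run alternatives in order until one exits 0, like the shell `a || b || c`.
        for cmd in commands:
            if subprocess.run(cmd).returncode == 0:
                return cmd
        return None

    services = ("ntp.service", "ntpd.service", "chronyd.service")
    first_success(*[["sudo", "systemctl", "stop", s] for s in services])
    first_success(["sudo", "ntpd", "-gq"], ["sudo", "chronyc", "makestep"])
    first_success(*[["sudo", "systemctl", "start", s] for s in services])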
2026-03-09T14:13:52.308 DEBUG:teuthology.orchestra.run.vm03:> sudo hostname $(hostname -s)
2026-03-09T14:13:52.316 DEBUG:teuthology.orchestra.run.vm04:> sudo hostname $(hostname -s)
2026-03-09T14:13:52.323 DEBUG:teuthology.orchestra.run.vm05:> sudo hostname $(hostname -s)
2026-03-09T14:13:52.331 INFO:tasks.cephadm:Downloading "compiled" cephadm from chacra
2026-03-09T14:13:52.331 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T14:13:52.969 INFO:tasks.cephadm:builder_project result: [{'url': 'https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/', 'chacra_url': 'https://1.chacra.ceph.com/repos/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/', 'ref': 'squid', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'distro': 'ubuntu', 'distro_version': '22.04', 'distro_codename': 'jammy', 'modified': '2026-02-25 19:37:07.680480', 'status': 'ready', 'flavor': 'default', 'project': 'ceph', 'archs': ['x86_64'], 'extra': {'version': '19.2.3-678-ge911bdeb', 'package_manager_version': '19.2.3-678-ge911bdeb-1jammy', 'build_url': 'https://jenkins.ceph.com/job/ceph-dev-pipeline/3275/', 'root_build_cause': '', 'node_name': '10.20.192.98+toko08', 'job_name': 'ceph-dev-pipeline'}}]
2026-03-09T14:13:53.573 INFO:tasks.util.chacra:got chacra host 1.chacra.ceph.com, ref squid, sha1 e911bdebe5c8faa3800735d1568fcdca65db60df from https://shaman.ceph.com/api/search/?project=ceph&distros=ubuntu%2F22.04%2Fx86_64&flavor=default&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T14:13:53.575 INFO:tasks.cephadm:Discovered chacra url: https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm
2026-03-09T14:13:53.575 INFO:tasks.cephadm:Downloading cephadm from url: https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm
2026-03-09T14:13:53.575 DEBUG:teuthology.orchestra.run.vm03:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-09T14:13:55.030 INFO:teuthology.orchestra.run.vm03.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 9 14:13 /home/ubuntu/cephtest/cephadm
2026-03-09T14:13:55.030 DEBUG:teuthology.orchestra.run.vm04:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-09T14:13:56.441 INFO:teuthology.orchestra.run.vm04.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 9 14:13 /home/ubuntu/cephtest/cephadm
2026-03-09T14:13:56.442 DEBUG:teuthology.orchestra.run.vm05:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-09T14:13:57.798 INFO:teuthology.orchestra.run.vm05.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 9 14:13 /home/ubuntu/cephtest/cephadm
2026-03-09T14:13:57.798 DEBUG:teuthology.orchestra.run.vm03:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-09T14:13:57.802 DEBUG:teuthology.orchestra.run.vm04:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-09T14:13:57.806 DEBUG:teuthology.orchestra.run.vm05:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-09T14:13:57.816 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts...
2026-03-09T14:13:57.816 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-09T14:13:57.845 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-09T14:13:57.850 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-09T14:13:57.944 INFO:teuthology.orchestra.run.vm04.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-09T14:13:57.947 INFO:teuthology.orchestra.run.vm03.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-09T14:13:57.952 INFO:teuthology.orchestra.run.vm05.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-09T14:15:02.064 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-09T14:15:02.064 INFO:teuthology.orchestra.run.vm05.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-09T14:15:02.064 INFO:teuthology.orchestra.run.vm05.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-09T14:15:02.064 INFO:teuthology.orchestra.run.vm05.stdout: "repo_digests": [
2026-03-09T14:15:02.064 INFO:teuthology.orchestra.run.vm05.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-09T14:15:02.064 INFO:teuthology.orchestra.run.vm05.stdout: ]
2026-03-09T14:15:02.064 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-09T14:16:30.362 INFO:teuthology.orchestra.run.vm04.stdout:{
2026-03-09T14:16:30.362 INFO:teuthology.orchestra.run.vm04.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-09T14:16:30.363 INFO:teuthology.orchestra.run.vm04.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-09T14:16:30.363 INFO:teuthology.orchestra.run.vm04.stdout: "repo_digests": [
2026-03-09T14:16:30.363 INFO:teuthology.orchestra.run.vm04.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-09T14:16:30.363 INFO:teuthology.orchestra.run.vm04.stdout: ]
2026-03-09T14:16:30.363 INFO:teuthology.orchestra.run.vm04.stdout:}
2026-03-09T14:16:34.856 INFO:teuthology.orchestra.run.vm03.stdout:{
2026-03-09T14:16:34.856 INFO:teuthology.orchestra.run.vm03.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-09T14:16:34.856 INFO:teuthology.orchestra.run.vm03.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-09T14:16:34.856 INFO:teuthology.orchestra.run.vm03.stdout: "repo_digests": [
2026-03-09T14:16:34.856 INFO:teuthology.orchestra.run.vm03.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-09T14:16:34.856 INFO:teuthology.orchestra.run.vm03.stdout: ]
2026-03-09T14:16:34.856 INFO:teuthology.orchestra.run.vm03.stdout:}
2026-03-09T14:16:34.867 DEBUG:teuthology.orchestra.run.vm03:> sudo mkdir -p /etc/ceph
2026-03-09T14:16:34.875 DEBUG:teuthology.orchestra.run.vm04:> sudo mkdir -p /etc/ceph
2026-03-09T14:16:34.885 DEBUG:teuthology.orchestra.run.vm05:> sudo mkdir -p /etc/ceph
2026-03-09T14:16:34.893 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod 777 /etc/ceph
2026-03-09T14:16:34.924 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod 777 /etc/ceph
2026-03-09T14:16:34.933 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod 777 /etc/ceph
2026-03-09T14:16:34.940 INFO:tasks.cephadm:Writing seed config...
2026-03-09T14:16:34.940 INFO:tasks.cephadm: override: [global] mon warn on pool no app = False
2026-03-09T14:16:34.940 INFO:tasks.cephadm: override: [mgr] debug mgr = 20
2026-03-09T14:16:34.940 INFO:tasks.cephadm: override: [mgr] debug ms = 1
2026-03-09T14:16:34.940 INFO:tasks.cephadm: override: [mon] debug mon = 20
2026-03-09T14:16:34.941 INFO:tasks.cephadm: override: [mon] debug ms = 1
2026-03-09T14:16:34.941 INFO:tasks.cephadm: override: [mon] debug paxos = 20
2026-03-09T14:16:34.941 INFO:tasks.cephadm: override: [osd] debug ms = 1
2026-03-09T14:16:34.941 INFO:tasks.cephadm: override: [osd] debug osd = 20
2026-03-09T14:16:34.941 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000
2026-03-09T14:16:34.941 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-09T14:16:34.941 DEBUG:teuthology.orchestra.run.vm03:> dd of=/home/ubuntu/cephtest/seed.ceph.conf
2026-03-09T14:16:34.971 DEBUG:tasks.cephadm:Final config:
[global]
# make logging friendly to teuthology
log_to_file = true
log_to_stderr = false
log to journald = false
mon cluster log to file = true
mon cluster log file level = debug
mon clock drift allowed = 1.000
# replicate across OSDs, not hosts
osd crush chooseleaf type = 0
#osd pool default size = 2
osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
# enable some debugging
auth debug = true
ms die on old message = true
ms die on bug = true
debug asserts on shutdown = true
# adjust warnings
mon max pg per osd = 10000  # >= luminous
mon pg warn max object skew = 0
mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false
# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off
# tests delete pools
mon allow pool delete = true
fsid = 3346de4a-1bc2-11f1-95ae-3796c8433614
mon warn on pool no app = False

[osd]
osd scrub load threshold = 5.0
osd scrub max interval = 600
osd mclock profile = high_recovery_ops
osd recover clone overlap = true
osd recovery max chunk = 1048576
osd deep scrub update digest min age = 30
osd map max advance = 10
osd memory target autotune = true
# debugging
osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true
osd debug pg log writeout = true
osd debug verify cached snaps = true
osd debug verify missing on start = true
osd debug misdirected ops = true
osd op queue = debug_random
osd op queue cut off = debug_random
osd shutdown pgref assert = true
bdev debug aio = true
osd sloppy crc = true
debug ms = 1
debug osd = 20
osd mclock iops capacity threshold hdd = 49000

[mgr]
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false
debug mgr = 20
debug ms = 1

[mon]
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10
# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660  # 11m
auth service ticket ttl = 240  # 4m
# don't complain about global id reclaim
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false
debug mon = 20
debug ms = 1
debug paxos = 20

[client.rgw]
rgw cache enabled = true
rgw enable ops log = true
rgw enable usage log = true
2026-03-09T14:16:34.971 DEBUG:teuthology.orchestra.run.vm03:mon.a> sudo journalctl -f -n 0 -u ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@mon.a.service
2026-03-09T14:16:35.013 DEBUG:teuthology.orchestra.run.vm03:mgr.x> sudo journalctl -f -n 0 -u ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@mgr.x.service
2026-03-09T14:16:35.057 INFO:tasks.cephadm:Bootstrapping...
2026-03-09T14:16:35.057 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 192.168.123.103 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring
2026-03-09T14:16:35.197 INFO:teuthology.orchestra.run.vm03.stdout:--------------------------------------------------------------------------------
2026-03-09T14:16:35.197 INFO:teuthology.orchestra.run.vm03.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', '3346de4a-1bc2-11f1-95ae-3796c8433614', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-id', 'a', '--mgr-id', 'x', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-ip', '192.168.123.103', '--skip-admin-label']
2026-03-09T14:16:35.197 INFO:teuthology.orchestra.run.vm03.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts.
2026-03-09T14:16:35.197 INFO:teuthology.orchestra.run.vm03.stdout:Verifying podman|docker is present...
2026-03-09T14:16:35.197 INFO:teuthology.orchestra.run.vm03.stdout:Verifying lvm2 is present...
2026-03-09T14:16:35.197 INFO:teuthology.orchestra.run.vm03.stdout:Verifying time synchronization is in place...
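The seed config above is streamed over the SSH channel into dd's stdin on vm03. A local equivalent is a plain heredoc; this sketch shows only one of the overrides:

    # write the seed config the same way the task does, with dd reading stdin
    dd of=/home/ubuntu/cephtest/seed.ceph.conf <<'EOF'
    [global]
    mon warn on pool no app = False
    EOF
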
2026-03-09T14:16:35.201 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-09T14:16:35.201 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-09T14:16:35.203 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-09T14:16:35.203 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout inactive
2026-03-09T14:16:35.206 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service
2026-03-09T14:16:35.206 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory
2026-03-09T14:16:35.208 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service
2026-03-09T14:16:35.208 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout inactive
2026-03-09T14:16:35.210 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service
2026-03-09T14:16:35.210 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout masked
2026-03-09T14:16:35.212 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service
2026-03-09T14:16:35.212 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout inactive
2026-03-09T14:16:35.215 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service
2026-03-09T14:16:35.215 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory
2026-03-09T14:16:35.217 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service
2026-03-09T14:16:35.217 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout inactive
2026-03-09T14:16:35.220 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout enabled
2026-03-09T14:16:35.222 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout active
2026-03-09T14:16:35.222 INFO:teuthology.orchestra.run.vm03.stdout:Unit ntp.service is enabled and running
2026-03-09T14:16:35.222 INFO:teuthology.orchestra.run.vm03.stdout:Repeating the final host check...
2026-03-09T14:16:35.223 INFO:teuthology.orchestra.run.vm03.stdout:docker (/usr/bin/docker) is present
2026-03-09T14:16:35.223 INFO:teuthology.orchestra.run.vm03.stdout:systemctl is present
2026-03-09T14:16:35.223 INFO:teuthology.orchestra.run.vm03.stdout:lvcreate is present
2026-03-09T14:16:35.225 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-09T14:16:35.225 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-09T14:16:35.228 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-09T14:16:35.228 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout inactive
2026-03-09T14:16:35.231 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service
2026-03-09T14:16:35.231 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory
2026-03-09T14:16:35.233 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service
2026-03-09T14:16:35.233 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout inactive
2026-03-09T14:16:35.236 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service
2026-03-09T14:16:35.236 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout masked
2026-03-09T14:16:35.239 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service
2026-03-09T14:16:35.239 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout inactive
2026-03-09T14:16:35.242 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service
2026-03-09T14:16:35.242 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory
2026-03-09T14:16:35.244 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service
2026-03-09T14:16:35.244 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout inactive
2026-03-09T14:16:35.247 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout enabled
2026-03-09T14:16:35.249 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout active
2026-03-09T14:16:35.249 INFO:teuthology.orchestra.run.vm03.stdout:Unit ntp.service is enabled and running
2026-03-09T14:16:35.249 INFO:teuthology.orchestra.run.vm03.stdout:Host looks OK
2026-03-09T14:16:35.249 INFO:teuthology.orchestra.run.vm03.stdout:Cluster fsid: 3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:16:35.249 INFO:teuthology.orchestra.run.vm03.stdout:Acquiring lock 140693760313952 on /run/cephadm/3346de4a-1bc2-11f1-95ae-3796c8433614.lock
2026-03-09T14:16:35.250 INFO:teuthology.orchestra.run.vm03.stdout:Lock 140693760313952 acquired on /run/cephadm/3346de4a-1bc2-11f1-95ae-3796c8433614.lock
2026-03-09T14:16:35.250 INFO:teuthology.orchestra.run.vm03.stdout:Verifying IP 192.168.123.103 port 3300 ...
2026-03-09T14:16:35.250 INFO:teuthology.orchestra.run.vm03.stdout:Verifying IP 192.168.123.103 port 6789 ...
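The time-sync check above (run twice: once up front, once in the "final host check") is just systemctl probing candidate units until one is both enabled and active. A rough shell equivalent of the logged sequence; the unit list is abbreviated to the ones probed in this run:

    for unit in chrony.service chronyd.service systemd-timesyncd.service ntpd.service ntp.service; do
        if systemctl is-enabled "$unit" >/dev/null 2>&1 && systemctl is-active --quiet "$unit"; then
            echo "Unit $unit is enabled and running"
            break
        fi
    done
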
2026-03-09T14:16:35.250 INFO:teuthology.orchestra.run.vm03.stdout:Base mon IP(s) is [192.168.123.103:3300, 192.168.123.103:6789], mon addrv is [v2:192.168.123.103:3300,v1:192.168.123.103:6789]
2026-03-09T14:16:35.251 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout default via 192.168.123.1 dev ens3 proto dhcp src 192.168.123.103 metric 100
2026-03-09T14:16:35.251 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
2026-03-09T14:16:35.251 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout 192.168.123.0/24 dev ens3 proto kernel scope link src 192.168.123.103 metric 100
2026-03-09T14:16:35.252 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout 192.168.123.1 dev ens3 proto dhcp scope link src 192.168.123.103 metric 100
2026-03-09T14:16:35.253 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium
2026-03-09T14:16:35.253 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout fe80::/64 dev ens3 proto kernel metric 256 pref medium
2026-03-09T14:16:35.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000
2026-03-09T14:16:35.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout inet6 ::1/128 scope host
2026-03-09T14:16:35.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-09T14:16:35.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout 2: ens3: mtu 1500 state UP qlen 1000
2026-03-09T14:16:35.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout inet6 fe80::5055:ff:fe00:3/64 scope link
2026-03-09T14:16:35.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-09T14:16:35.254 INFO:teuthology.orchestra.run.vm03.stdout:Mon IP `192.168.123.103` is in CIDR network `192.168.123.0/24`
2026-03-09T14:16:35.254 INFO:teuthology.orchestra.run.vm03.stdout:Mon IP `192.168.123.103` is in CIDR network `192.168.123.0/24`
2026-03-09T14:16:35.254 INFO:teuthology.orchestra.run.vm03.stdout:Mon IP `192.168.123.103` is in CIDR network `192.168.123.1/32`
2026-03-09T14:16:35.254 INFO:teuthology.orchestra.run.vm03.stdout:Mon IP `192.168.123.103` is in CIDR network `192.168.123.1/32`
2026-03-09T14:16:35.254 INFO:teuthology.orchestra.run.vm03.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24', '192.168.123.1/32', '192.168.123.1/32']
2026-03-09T14:16:35.254 INFO:teuthology.orchestra.run.vm03.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
2026-03-09T14:16:35.255 INFO:teuthology.orchestra.run.vm03.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
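The CIDR inference above is derived from the `ip route ls` / `ip -6 route ls` output that cephadm parses. A crude grep-style approximation (not cephadm's actual parser) that lists the connected networks, among which 192.168.123.0/24 covers the mon IP:

    # list directly-connected prefixes; one of them contains the mon IP
    ip -o route ls scope link | awk '{print $1}'
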
2026-03-09T14:16:36.245 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/docker: stdout e911bdebe5c8faa3800735d1568fcdca65db60df: Pulling from ceph-ci/ceph
2026-03-09T14:16:36.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/docker: stdout Digest: sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-09T14:16:36.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/docker: stdout Status: Image is up to date for quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T14:16:36.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/docker: stdout quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T14:16:36.431 INFO:teuthology.orchestra.run.vm03.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-09T14:16:36.432 INFO:teuthology.orchestra.run.vm03.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-09T14:16:36.432 INFO:teuthology.orchestra.run.vm03.stdout:Extracting ceph user uid/gid from container image...
2026-03-09T14:16:36.535 INFO:teuthology.orchestra.run.vm03.stdout:stat: stdout 167 167
2026-03-09T14:16:36.535 INFO:teuthology.orchestra.run.vm03.stdout:Creating initial keys...
2026-03-09T14:16:36.657 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-authtool: stdout AQBE1q5prtBYJRAAIPWh07dSvixtLtDamDHLQA==
2026-03-09T14:16:36.762 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-authtool: stdout AQBE1q5pO/3dKxAAA8WeRylONp81Ve9DFlZCog==
2026-03-09T14:16:36.853 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-authtool: stdout AQBE1q5pHE1BMRAAFyAM9T2iuNWz8k5bhqLzFA==
2026-03-09T14:16:36.854 INFO:teuthology.orchestra.run.vm03.stdout:Creating initial monmap...
2026-03-09T14:16:36.963 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-09T14:16:36.963 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy
2026-03-09T14:16:36.963 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to 3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:16:36.963 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-09T14:16:36.963 INFO:teuthology.orchestra.run.vm03.stdout:monmaptool for a [v2:192.168.123.103:3300,v1:192.168.123.103:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-09T14:16:36.963 INFO:teuthology.orchestra.run.vm03.stdout:setting min_mon_release = quincy
2026-03-09T14:16:36.963 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: set fsid to 3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:16:36.963 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-09T14:16:36.963 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:16:36.963 INFO:teuthology.orchestra.run.vm03.stdout:Creating mon...
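The keys and the initial monmap come from stock tools run inside the container image. Equivalent direct invocations, as a sketch using this run's fsid and mon address (cephadm's exact argument set differs):

    # emit a fresh base64 secret like the three logged above
    ceph-authtool --gen-print-key
    # build a one-monitor epoch-0 monmap and show it
    monmaptool --create --clobber --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 \
        --addv a '[v2:192.168.123.103:3300,v1:192.168.123.103:6789]' /tmp/monmap
    monmaptool --print /tmp/monmap
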
2026-03-09T14:16:37.097 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-09T14:16:37.097 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 1 imported monmap: 2026-03-09T14:16:37.097 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr epoch 0 2026-03-09T14:16:37.097 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 2026-03-09T14:16:37.097 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr last_changed 2026-03-09T14:16:36.936076+0000 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr created 2026-03-09T14:16:36.936076+0000 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr min_mon_release 17 (quincy) 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr election_strategy: 1 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 0 /usr/bin/ceph-mon: set fsid to 3346de4a-1bc2-11f1-95ae-3796c8433614 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: RocksDB version: 7.9.2 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Git sha 0 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: DB SUMMARY 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: DB Session ID: CXRV6WVEOP50V8DBO4C1 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 0, files: 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.error_if_exists: 0 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: 
stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.create_if_missing: 1 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.paranoid_checks: 1 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.env: 0x55e805fc5dc0 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.info_log: 0x55e81f5a6da0 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.statistics: (nil) 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.use_fsync: 0 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.max_log_file_size: 0 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.allow_fallocate: 1 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: 
Options.use_direct_reads: 0 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.db_log_dir: 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.wal_dir: 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.write_buffer_manager: 0x55e81f59d5e0 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: 
Options.wal_recovery_mode: 2 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-09T14:16:37.098 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.unordered_write: 0 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.row_cache: None 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.wal_filter: None 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.two_write_queues: 0 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.wal_compression: 0 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.atomic_flush: 0 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.log_readahead_size: 0 2026-03-09T14:16:37.099 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.max_background_jobs: 2 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.max_background_compactions: -1 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.max_subcompactions: 1 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: 
Options.max_open_files: -1 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Options.max_background_flushes: -1 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Compression algorithms supported: 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: kZSTD supported: 0 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: kXpressCompression supported: 0 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: kBZip2Compression supported: 0 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: kLZ4Compression supported: 1 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: kZlibCompression supported: 1 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: kSnappyCompression supported: 1 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.043+0000 7f5c9aff9d80 4 rocksdb: [db/db_impl/db_impl_open.cc:317] Creating manifest 1 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001 2026-03-09T14:16:37.099 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.merge_operator: 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.compaction_filter: None 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T14:16:37.099 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e81f599520) 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks: 1 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr pin_top_level_index_and_filter: 1 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr index_type: 0 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr data_block_index_type: 0 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr index_shortening: 1 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr data_block_hash_table_util_ratio: 0.750000 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr checksum: 4 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr no_block_cache: 0 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr block_cache: 0x55e81f5bf350 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr block_cache_name: BinnedLRUCache 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr block_cache_options: 2026-03-09T14:16:37.100 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr capacity : 536870912 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr num_shard_bits : 4 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr strict_capacity_limit : 0 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr high_pri_pool_ratio: 0.000 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr block_cache_compressed: (nil) 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr persistent_cache: (nil) 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr block_size: 4096 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr block_size_deviation: 10 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr block_restart_interval: 16 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr index_block_restart_interval: 1 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr metadata_block_size: 4096 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr partition_filters: 0 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr use_delta_encoding: 1 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr filter_policy: bloomfilter 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr whole_key_filtering: 1 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr verify_compression: 0 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr read_amp_bytes_per_bit: 0 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr format_version: 5 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr enable_index_compression: 1 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr block_align: 0 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr max_auto_readahead_size: 262144 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr prepopulate_block_cache: 0 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr initial_auto_readahead_size: 8192 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr num_file_reads_for_auto_readahead: 2 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.compression: NoCompression 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 
2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.num_levels: 7 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-09T14:16:37.100 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T14:16:37.100 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T14:16:37.101 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 
rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.bloom_locality: 0 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.ttl: 2592000 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 
2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.enable_blob_files: false 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.min_blob_size: 0 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T14:16:37.101 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 96d5d84a-0e58-405e-93ed-eb7d24e38567 2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.047+0000 7f5c9aff9d80 4 rocksdb: [db/version_set.cc:5047] 
Creating manifest 5
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.051+0000 7f5c9aff9d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55e81f5c0e00
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.051+0000 7f5c9aff9d80 4 rocksdb: DB pointer 0x55e81f6a4000
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.051+0000 7f5c92783640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.051+0000 7f5c92783640 4 rocksdb: [db/db_impl/db_impl.cc:1111]
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr ** DB Stats **
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Interval stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr ** Compaction Stats [default] **
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr ** Compaction Stats [default] **
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Flush(GB): cumulative 0.000, interval 0.000
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr AddFile(GB): cumulative 0.000, interval 0.000
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr AddFile(Total Files): cumulative 0, interval 0
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr AddFile(L0 Files): cumulative 0, interval 0
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr AddFile(Keys): cumulative 0, interval 0
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Block cache BinnedLRUCache@0x55e81f5bf350#7 capacity: 512.00 MB usage: 0.00 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 8e-06 secs_since: 0
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Block cache entry stats(count,size,portion): Misc(1,0.00 KB,0%)
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr ** File Read Latency Histogram By Level [default] **
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.055+0000 7f5c9aff9d80 4 rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.055+0000 7f5c9aff9d80 4 rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T14:16:37.055+0000 7f5c9aff9d80 0 /usr/bin/ceph-mon: created monfs at /var/lib/ceph/mon/ceph-a for mon.a
2026-03-09T14:16:37.102 INFO:teuthology.orchestra.run.vm03.stdout:create mon.a on
2026-03-09T14:16:37.469 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target.
2026-03-09T14:16:37.646 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614.target → /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614.target.
2026-03-09T14:16:37.646 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614.target → /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614.target.
2026-03-09T14:16:37.840 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:37 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:16:37.843 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@mon.a
2026-03-09T14:16:37.844 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to reset failed state of unit ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@mon.a.service: Unit ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@mon.a.service not loaded.
2026-03-09T14:16:38.033 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614.target.wants/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@mon.a.service → /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service.
2026-03-09T14:16:38.042 INFO:teuthology.orchestra.run.vm03.stdout:firewalld does not appear to be present
2026-03-09T14:16:38.042 INFO:teuthology.orchestra.run.vm03.stdout:Not possible to enable service . firewalld.service is not available
2026-03-09T14:16:38.042 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mon to start...
2026-03-09T14:16:38.042 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mon...
2026-03-09T14:16:38.107 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:37 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:16:38.107 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:38 vm03 systemd[1]: Started Ceph mon.a for 3346de4a-1bc2-11f1-95ae-3796c8433614.
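The two systemd complaints above are harmless here, but worth reading: the deployed unit template still ships KillMode=none (deprecated), and the reset-failed call exits non-zero simply because the unit had never been loaded before. As a sketch only (the unit name is copied from the log; the drop-in itself is an illustration of the fix systemd suggests, not something cephadm does), the KillMode could be overridden on the node like this:

    # Illustrative systemd drop-in; 'mixed' is one of the KillMode values
    # the warning above recommends over 'none'.
    mkdir -p /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service.d
    cat > /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service.d/killmode.conf <<'EOF'
    [Service]
    KillMode=mixed
    EOF
    systemctl daemon-reload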
2026-03-09T14:16:38.472 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:38 vm03 bash[17051]: cluster 2026-03-09T14:16:38.209592+0000 mon.a (mon.0) 0 : cluster [INF] mkfs 3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:16:38.472 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:38 vm03 bash[17051]: cluster 2026-03-09T14:16:38.209592+0000 mon.a (mon.0) 0 : cluster [INF] mkfs 3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:16:38.472 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:38 vm03 bash[17051]: cluster 2026-03-09T14:16:38.202207+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
2026-03-09T14:16:38.472 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:38 vm03 bash[17051]: cluster 2026-03-09T14:16:38.202207+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
2026-03-09T14:16:38.503 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout cluster:
2026-03-09T14:16:38.503 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout id: 3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:16:38.503 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout health: HEALTH_OK
2026-03-09T14:16:38.503 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-09T14:16:38.503 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout services:
2026-03-09T14:16:38.503 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum a (age 0.25152s)
2026-03-09T14:16:38.503 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mgr: no daemons active
2026-03-09T14:16:38.503 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in
2026-03-09T14:16:38.503 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-09T14:16:38.503 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout data:
2026-03-09T14:16:38.503 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs
2026-03-09T14:16:38.503 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B
2026-03-09T14:16:38.503 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail
2026-03-09T14:16:38.503 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout pgs:
2026-03-09T14:16:38.503 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-09T14:16:38.503 INFO:teuthology.orchestra.run.vm03.stdout:mon is available
2026-03-09T14:16:38.503 INFO:teuthology.orchestra.run.vm03.stdout:Assimilating anything we can from ceph.conf...
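The "Waiting for mon..." lines followed by the ceph -s dump above are a plain readiness poll: keep asking the new monitor for status until it answers, then print the report. A minimal sketch of the same check, assuming the admin keyring and conf are already in place on the host (the assimilation output announced above follows below):

    # Retry until the freshly started mon responds; with a connect timeout,
    # 'ceph -s' exits non-zero while the monitor is still unreachable.
    until ceph -s --connect-timeout 5 >/dev/null 2>&1; do
        sleep 1
    done
    ceph -s    # prints the cluster/services/data summary seen above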
2026-03-09T14:16:38.711 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-09T14:16:38.711 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [global]
2026-03-09T14:16:38.711 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout fsid = 3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:16:38.711 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug
2026-03-09T14:16:38.711 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.103:3300,v1:192.168.123.103:6789]
2026-03-09T14:16:38.711 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true
2026-03-09T14:16:38.711 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true
2026-03-09T14:16:38.711 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false
2026-03-09T14:16:38.711 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0
2026-03-09T14:16:38.711 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-09T14:16:38.712 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [mgr]
2026-03-09T14:16:38.712 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false
2026-03-09T14:16:38.712 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-09T14:16:38.712 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [osd]
2026-03-09T14:16:38.712 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10
2026-03-09T14:16:38.712 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true
2026-03-09T14:16:38.712 INFO:teuthology.orchestra.run.vm03.stdout:Generating new minimal ceph.conf...
2026-03-09T14:16:38.907 INFO:teuthology.orchestra.run.vm03.stdout:Restarting the monitor...
2026-03-09T14:16:39.009 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:38 vm03 systemd[1]: Stopping Ceph mon.a for 3346de4a-1bc2-11f1-95ae-3796c8433614...
2026-03-09T14:16:39.009 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:38 vm03 bash[17051]: debug 2026-03-09T14:16:38.939+0000 7effdf553640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-09T14:16:39.009 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:38 vm03 bash[17051]: debug 2026-03-09T14:16:38.939+0000 7effdf553640 -1 mon.a@0(leader) e1 *** Got Signal Terminated ***
2026-03-09T14:16:39.077 INFO:teuthology.orchestra.run.vm03.stdout:Setting public_network to 192.168.123.1/32,192.168.123.0/24 in mon config section
2026-03-09T14:16:39.283 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17438]: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614-mon-a
2026-03-09T14:16:39.283 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 systemd[1]: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@mon.a.service: Deactivated successfully.
2026-03-09T14:16:39.283 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 systemd[1]: Stopped Ceph mon.a for 3346de4a-1bc2-11f1-95ae-3796c8433614.
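The assimilate/minimal-conf/public_network sequence above maps onto a handful of ceph config commands; a rough manual equivalent, with illustrative file paths (the values are taken from the log):

    # Ingest options from an existing ceph.conf into the mon config database;
    # anything that cannot be stored centrally is written back out via -o.
    ceph config assimilate-conf -i /etc/ceph/ceph.conf -o /tmp/leftover.conf
    # Emit a minimal ceph.conf (essentially just fsid and mon_host).
    ceph config generate-minimal-conf > /etc/ceph/ceph.conf.minimal
    # Counterpart of "Setting public_network ... in mon config section".
    ceph config set mon public_network 192.168.123.1/32,192.168.123.0/24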
2026-03-09T14:16:39.283 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 systemd[1]: Started Ceph mon.a for 3346de4a-1bc2-11f1-95ae-3796c8433614.
2026-03-09T14:16:39.283 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.195+0000 7f20d822ed80 0 set uid:gid to 167:167 (ceph:ceph)
2026-03-09T14:16:39.283 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.195+0000 7f20d822ed80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7
2026-03-09T14:16:39.283 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.195+0000 7f20d822ed80 0 pidfile_write: ignore empty --pid-file
2026-03-09T14:16:39.283 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 0 load: jerasure load: lrc
2026-03-09T14:16:39.283 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: RocksDB version: 7.9.2
2026-03-09T14:16:39.283 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Git sha 0
2026-03-09T14:16:39.283 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Compile date 2026-02-25 18:11:04
2026-03-09T14:16:39.283 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: DB SUMMARY
2026-03-09T14:16:39.283 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: DB Session ID: RSU6929FFRULJCA5TQJZ
2026-03-09T14:16:39.283 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: CURRENT file: CURRENT
2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: IDENTITY file: IDENTITY
2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes
2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst
2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000009.log size: 75491 ;
2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.error_if_exists: 0
2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.create_if_missing: 0
2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.paranoid_checks: 1
2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug
2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.env: 0x561b010d3dc0 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.info_log: 0x561b28992700 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.statistics: (nil) 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.use_fsync: 0 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.max_log_file_size: 0 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.allow_fallocate: 1 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.use_direct_reads: 0 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T14:16:39.284 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.db_log_dir: 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.wal_dir: 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.write_buffer_manager: 0x561b28997900 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-09T14:16:39.284 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 
7f20d822ed80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.unordered_write: 0 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.row_cache: None 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.wal_filter: None 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.two_write_queues: 0 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.wal_compression: 0 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.atomic_flush: 0 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.log_readahead_size: 0 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 
bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.max_background_jobs: 2 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.max_background_compactions: -1 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.max_subcompactions: 1 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 
2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.max_open_files: -1 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.max_background_flushes: -1 2026-03-09T14:16:39.285 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Compression algorithms supported: 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: kZSTD supported: 0 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: kXpressCompression supported: 0 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: kBZip2Compression supported: 0 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: kLZ4Compression supported: 1 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: kZlibCompression supported: 1 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: kSnappyCompression supported: 1 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: [db/column_family.cc:630] --------------- 
Options for column family [default]: 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.merge_operator: 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.compaction_filter: None 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561b28992640) 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: cache_index_and_filter_blocks: 1 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: pin_top_level_index_and_filter: 1 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: index_type: 0 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: data_block_index_type: 0 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: index_shortening: 1 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: data_block_hash_table_util_ratio: 0.750000 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: checksum: 4 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: no_block_cache: 0 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: block_cache: 0x561b289b9350 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: block_cache_name: BinnedLRUCache 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: block_cache_options: 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: capacity : 536870912 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: num_shard_bits : 4 2026-03-09T14:16:39.286 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: strict_capacity_limit : 0 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: high_pri_pool_ratio: 0.000 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: block_cache_compressed: (nil) 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: persistent_cache: (nil) 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: block_size: 4096 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: block_size_deviation: 10 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: block_restart_interval: 16 2026-03-09T14:16:39.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: index_block_restart_interval: 1 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: metadata_block_size: 4096 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: partition_filters: 0 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: use_delta_encoding: 1 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: filter_policy: bloomfilter 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: whole_key_filtering: 1 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: verify_compression: 0 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: read_amp_bytes_per_bit: 0 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: format_version: 5 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: enable_index_compression: 1 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: block_align: 0 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: max_auto_readahead_size: 262144 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: prepopulate_block_cache: 0 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: initial_auto_readahead_size: 8192 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: num_file_reads_for_auto_readahead: 2 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.compression: NoCompression 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 
2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.num_levels: 7 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: 
Options.compression_opts.strategy: 0 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T14:16:39.287 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 
bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 
2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.inplace_update_support: 0 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.bloom_locality: 0 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.max_successive_merges: 0 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.ttl: 2592000 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: 
Options.periodic_compaction_seconds: 0 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T14:16:39.288 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.enable_blob_files: false 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.min_blob_size: 0 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.199+0000 7f20d822ed80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.207+0000 7f20d822ed80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.207+0000 7f20d822ed80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.207+0000 7f20d822ed80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 96d5d84a-0e58-405e-93ed-eb7d24e38567 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.207+0000 7f20d822ed80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773065799212857, "job": 1, "event": "recovery_started", 
"wal_files": [9]} 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.207+0000 7f20d822ed80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.211+0000 7f20d822ed80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773065799214617, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 72561, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 225, "table_properties": {"data_size": 70836, "index_size": 178, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 517, "raw_key_size": 9693, "raw_average_key_size": 49, "raw_value_size": 65342, "raw_average_value_size": 333, "num_data_blocks": 8, "num_entries": 196, "num_filter_entries": 196, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773065799, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "96d5d84a-0e58-405e-93ed-eb7d24e38567", "db_session_id": "RSU6929FFRULJCA5TQJZ", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}} 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.211+0000 7f20d822ed80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773065799214691, "job": 1, "event": "recovery_finished"} 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.211+0000 7f20d822ed80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.215+0000 7f20d822ed80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.215+0000 7f20d822ed80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x561b289bae00 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.215+0000 7f20d822ed80 4 rocksdb: DB pointer 0x561b28ad0000 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.215+0000 7f20cdff8640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.215+0000 7f20cdff8640 4 rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 
bash[17524]: ** DB Stats ** 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: ** Compaction Stats [default] ** 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: L0 2/0 72.72 KB 0.5 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 44.3 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: Sum 2/0 72.72 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 44.3 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 44.3 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: ** Compaction Stats [default] ** 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 44.3 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 
2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: Flush(GB): cumulative 0.000, interval 0.000 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: AddFile(Total Files): cumulative 0, interval 0 2026-03-09T14:16:39.289 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T14:16:39.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: AddFile(Keys): cumulative 0, interval 0 2026-03-09T14:16:39.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: Cumulative compaction: 0.00 GB write, 4.62 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T14:16:39.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: Interval compaction: 0.00 GB write, 4.62 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T14:16:39.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T14:16:39.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: Block cache BinnedLRUCache@0x561b289b9350#7 capacity: 512.00 MB usage: 1.08 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.3e-05 secs_since: 0 2026-03-09T14:16:39.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: Block cache entry stats(count,size,portion): FilterBlock(2,0.70 KB,0.00013411%) IndexBlock(2,0.38 KB,7.15256e-05%) Misc(1,0.00 KB,0%) 2026-03-09T14:16:39.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: ** File Read Latency Histogram By Level [default] ** 2026-03-09T14:16:39.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.215+0000 7f20d822ed80 0 starting mon.a rank 0 at public addrs [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] at bind addrs [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon_data /var/lib/ceph/mon/ceph-a fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 2026-03-09T14:16:39.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.215+0000 7f20d822ed80 1 mon.a@-1(???) 
e1 preinit fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 2026-03-09T14:16:39.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.215+0000 7f20d822ed80 5 mon.a@-1(???).mds e0 Unable to load 'last_metadata' 2026-03-09T14:16:39.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.215+0000 7f20d822ed80 5 mon.a@-1(???).mds e0 Unable to load 'last_metadata' 2026-03-09T14:16:39.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.215+0000 7f20d822ed80 0 mon.a@-1(???).mds e1 new map 2026-03-09T14:16:39.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.215+0000 7f20d822ed80 0 mon.a@-1(???).mds e1 print_map 2026-03-09T14:16:39.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: e1 2026-03-09T14:16:39.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: btime 2026-03-09T14:16:38:208797+0000 2026-03-09T14:16:39.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T14:16:39.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-09T14:16:39.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: legacy client fscid: -1 2026-03-09T14:16:39.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: 2026-03-09T14:16:39.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: No filesystems configured 2026-03-09T14:16:39.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.215+0000 7f20d822ed80 0 mon.a@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-09T14:16:39.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.215+0000 7f20d822ed80 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T14:16:39.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.215+0000 7f20d822ed80 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T14:16:39.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.215+0000 7f20d822ed80 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T14:16:39.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.215+0000 7f20d822ed80 1 mon.a@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3 2026-03-09T14:16:39.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.215+0000 7f20d822ed80 4 mon.a@-1(???).mgr e0 loading version 1 2026-03-09T14:16:39.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: debug 2026-03-09T14:16:39.215+0000 7f20d822ed80 4 mon.a@-1(???).mgr e1 active server: (0) 2026-03-09T14:16:39.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: 
debug 2026-03-09T14:16:39.215+0000 7f20d822ed80 4 mon.a@-1(???).mgr e1 mkfs or daemon transitioned to available, loading commands 2026-03-09T14:16:39.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: cluster 2026-03-09T14:16:39.224957+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T14:16:39.334 INFO:teuthology.orchestra.run.vm03.stdout:Wrote config to /etc/ceph/ceph.conf 2026-03-09T14:16:39.334 INFO:teuthology.orchestra.run.vm03.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring 2026-03-09T14:16:39.335 INFO:teuthology.orchestra.run.vm03.stdout:Creating mgr... 2026-03-09T14:16:39.335 INFO:teuthology.orchestra.run.vm03.stdout:Verifying port 0.0.0.0:9283 ... 2026-03-09T14:16:39.335 INFO:teuthology.orchestra.run.vm03.stdout:Verifying port 0.0.0.0:8765 ... 2026-03-09T14:16:39.525 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@mgr.x 2026-03-09T14:16:39.526 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to reset failed state of unit ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@mgr.x.service: Unit ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@mgr.x.service not loaded. 2026-03-09T14:16:39.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: cluster 2026-03-09T14:16:39.224957+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T14:16:39.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: cluster 2026-03-09T14:16:39.225007+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-09T14:16:39.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: cluster 2026-03-09T14:16:39.225007+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-09T14:16:39.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: cluster 2026-03-09T14:16:39.225013+0000 mon.a (mon.0) 3 : cluster [DBG] fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 2026-03-09T14:16:39.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: cluster 2026-03-09T14:16:39.225013+0000 mon.a (mon.0) 3 : cluster [DBG] fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 2026-03-09T14:16:39.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: cluster 2026-03-09T14:16:39.225016+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-09T14:16:36.936076+0000 2026-03-09T14:16:39.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: cluster 2026-03-09T14:16:39.225016+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-09T14:16:36.936076+0000 2026-03-09T14:16:39.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: cluster 2026-03-09T14:16:39.225027+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-09T14:16:36.936076+0000 2026-03-09T14:16:39.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: cluster 2026-03-09T14:16:39.225027+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-09T14:16:36.936076+0000 2026-03-09T14:16:39.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: cluster 2026-03-09T14:16:39.225030+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T14:16:39.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: cluster 2026-03-09T14:16:39.225030+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T14:16:39.557 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: cluster 2026-03-09T14:16:39.225036+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-09T14:16:39.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: cluster 2026-03-09T14:16:39.225036+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-09T14:16:39.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: cluster 2026-03-09T14:16:39.225039+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T14:16:39.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: cluster 2026-03-09T14:16:39.225039+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T14:16:39.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: cluster 2026-03-09T14:16:39.225308+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-09T14:16:39.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: cluster 2026-03-09T14:16:39.225308+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-09T14:16:39.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: cluster 2026-03-09T14:16:39.225325+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-09T14:16:39.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: cluster 2026-03-09T14:16:39.225325+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-09T14:16:39.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: cluster 2026-03-09T14:16:39.225837+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-09T14:16:39.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 bash[17524]: cluster 2026-03-09T14:16:39.225837+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-09T14:16:39.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:16:39.707 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614.target.wants/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@mgr.x.service → /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service. 2026-03-09T14:16:39.719 INFO:teuthology.orchestra.run.vm03.stdout:firewalld does not appear to be present 2026-03-09T14:16:39.719 INFO:teuthology.orchestra.run.vm03.stdout:Not possible to enable service . firewalld.service is not available 2026-03-09T14:16:39.719 INFO:teuthology.orchestra.run.vm03.stdout:firewalld does not appear to be present 2026-03-09T14:16:39.719 INFO:teuthology.orchestra.run.vm03.stdout:Not possible to open ports <[9283, 8765]>. firewalld.service is not available 2026-03-09T14:16:39.719 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mgr to start... 2026-03-09T14:16:39.719 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mgr... 
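[editor's note] The "Verifying port 0.0.0.0:9283 ..." and "0.0.0.0:8765 ..." lines above are cephadm's pre-flight check that the mgr's endpoints are not already bound before the mgr.x unit is created. A minimal illustrative probe of that kind, assuming nothing beyond the standard library; this is a sketch, not cephadm's actual implementation:

    # Illustrative port-availability probe, mirroring the "Verifying port"
    # step in the log above; not cephadm's actual code.
    import socket

    def port_is_free(addr: str, port: int) -> bool:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            try:
                s.bind((addr, port))
                return True
            except OSError:
                return False

    # 9283 and 8765 are the two ports checked in this run.
    for port in (9283, 8765):
        print(port, "free" if port_is_free("0.0.0.0", port) else "in use")
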
2026-03-09T14:16:39.952 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:39 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:16:39.980 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-09T14:16:39.980 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout { 2026-03-09T14:16:39.980 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsid": "3346de4a-1bc2-11f1-95ae-3796c8433614", 2026-03-09T14:16:39.980 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T14:16:39.980 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T14:16:39.980 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T14:16:39.980 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T14:16:39.980 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:16:39.980 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 0 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "a" 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:16:39.981 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T14:16:38:208797+0000", 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T14:16:38.210155+0000", 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout } 2026-03-09T14:16:39.981 INFO:teuthology.orchestra.run.vm03.stdout:mgr not available, waiting (1/15)... 
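[editor's note] The JSON dump above is the bootstrap polling `ceph status --format json-pretty` and retrying (here 1/15) until "mgrmap"/"available" flips to true. A minimal sketch of an equivalent wait loop, assuming a working `ceph` CLI and admin keyring on the node; the retry interval is an illustrative assumption, and this is not the bootstrap's actual code:

    # Poll cluster status until the mgr reports available, as the
    # "mgr not available, waiting (n/15)" loop above does.
    import json
    import subprocess
    import time

    def mgr_available() -> bool:
        out = subprocess.check_output(["ceph", "status", "--format", "json"])
        return json.loads(out)["mgrmap"]["available"]

    for attempt in range(1, 16):
        if mgr_available():
            print("mgr is available")
            break
        print(f"mgr not available, waiting ({attempt}/15)...")
        time.sleep(1)  # illustrative back-off; the real loop's timing may differ
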
2026-03-09T14:16:40.298 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:40 vm03 bash[17796]: debug 2026-03-09T14:16:40.123+0000 7f1640a55140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T14:16:40.557 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:40 vm03 bash[17796]: debug 2026-03-09T14:16:40.443+0000 7f1640a55140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T14:16:40.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:40 vm03 bash[17524]: audit 2026-03-09T14:16:39.289533+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.103:0/1977463852' entity='client.admin' 2026-03-09T14:16:40.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:40 vm03 bash[17524]: audit 2026-03-09T14:16:39.289533+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.103:0/1977463852' entity='client.admin' 2026-03-09T14:16:40.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:40 vm03 bash[17524]: audit 2026-03-09T14:16:39.937384+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.103:0/81453206' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T14:16:40.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:40 vm03 bash[17524]: audit 2026-03-09T14:16:39.937384+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.103:0/81453206' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T14:16:41.307 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:40 vm03 bash[17796]: debug 2026-03-09T14:16:40.947+0000 7f1640a55140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T14:16:41.307 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:41 vm03 bash[17796]: debug 2026-03-09T14:16:41.035+0000 7f1640a55140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T14:16:41.307 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:41 vm03 bash[17796]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T14:16:41.307 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:41 vm03 bash[17796]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
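[editor's note] The repeated "Module ... has missing NOTIFY_TYPES member" warnings are the mgr noting that a Python module does not declare which cluster-map updates it wants forwarded to its notify() hook. A hedged sketch of the declaration the mgr looks for, based on the upstream mgr_module API; the module name and the chosen notify types are assumptions for illustration, and this only runs inside ceph-mgr's embedded Python where the mgr_module bindings exist:

    # Sketch of a mgr module declaring NOTIFY_TYPES so the warning above
    # is not emitted; "Hello" is a made-up module name.
    from mgr_module import MgrModule, NotifyType

    class Hello(MgrModule):
        # Only these map updates are forwarded to notify().
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type: NotifyType, notify_id: str) -> None:
            self.log.debug("notified: %s (%s)", notify_type, notify_id)
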
2026-03-09T14:16:41.307 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:41 vm03 bash[17796]: from numpy import show_config as show_numpy_config 2026-03-09T14:16:41.307 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:41 vm03 bash[17796]: debug 2026-03-09T14:16:41.175+0000 7f1640a55140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T14:16:41.807 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:41 vm03 bash[17796]: debug 2026-03-09T14:16:41.315+0000 7f1640a55140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T14:16:41.807 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:41 vm03 bash[17796]: debug 2026-03-09T14:16:41.359+0000 7f1640a55140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T14:16:41.807 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:41 vm03 bash[17796]: debug 2026-03-09T14:16:41.399+0000 7f1640a55140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T14:16:41.807 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:41 vm03 bash[17796]: debug 2026-03-09T14:16:41.443+0000 7f1640a55140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T14:16:41.807 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:41 vm03 bash[17796]: debug 2026-03-09T14:16:41.499+0000 7f1640a55140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T14:16:42.220 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:41 vm03 bash[17796]: debug 2026-03-09T14:16:41.955+0000 7f1640a55140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T14:16:42.220 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:42 vm03 bash[17796]: debug 2026-03-09T14:16:42.007+0000 7f1640a55140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T14:16:42.220 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:42 vm03 bash[17796]: debug 2026-03-09T14:16:42.059+0000 7f1640a55140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T14:16:42.256 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-09T14:16:42.256 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout { 2026-03-09T14:16:42.256 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsid": "3346de4a-1bc2-11f1-95ae-3796c8433614", 2026-03-09T14:16:42.256 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T14:16:42.256 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T14:16:42.256 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T14:16:42.256 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T14:16:42.256 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:16:42.256 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T14:16:42.256 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T14:16:42.256 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 0 2026-03-09T14:16:42.256 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-09T14:16:42.256 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-09T14:16:42.256 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "a" 2026-03-09T14:16:42.256 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-09T14:16:42.256 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_age": 2, 2026-03-09T14:16:42.256 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T14:16:42.256 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T14:16:42.256 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-09T14:16:42.256 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T14:16:42.256 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:16:42.256 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T14:16:42.256 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T14:16:42.256 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T14:16:42.256 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T14:16:42.256 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T14:16:42.256 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T14:16:42.256 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T14:16:42.256 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T14:16:42.256 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:16:42.256 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T14:16:38:208797+0000", 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T14:16:42.257 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T14:16:38.210155+0000", 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout } 2026-03-09T14:16:42.257 INFO:teuthology.orchestra.run.vm03.stdout:mgr not available, waiting (2/15)... 2026-03-09T14:16:42.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:42 vm03 bash[17524]: audit 2026-03-09T14:16:42.186417+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.103:0/2167707565' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T14:16:42.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:42 vm03 bash[17524]: audit 2026-03-09T14:16:42.186417+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 
192.168.123.103:0/2167707565' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T14:16:42.557 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:42 vm03 bash[17796]: debug 2026-03-09T14:16:42.231+0000 7f1640a55140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T14:16:42.557 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:42 vm03 bash[17796]: debug 2026-03-09T14:16:42.291+0000 7f1640a55140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T14:16:42.557 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:42 vm03 bash[17796]: debug 2026-03-09T14:16:42.335+0000 7f1640a55140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T14:16:42.557 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:42 vm03 bash[17796]: debug 2026-03-09T14:16:42.451+0000 7f1640a55140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:16:42.889 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:42 vm03 bash[17796]: debug 2026-03-09T14:16:42.611+0000 7f1640a55140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T14:16:42.890 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:42 vm03 bash[17796]: debug 2026-03-09T14:16:42.795+0000 7f1640a55140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T14:16:42.890 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:42 vm03 bash[17796]: debug 2026-03-09T14:16:42.831+0000 7f1640a55140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T14:16:43.298 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:42 vm03 bash[17796]: debug 2026-03-09T14:16:42.883+0000 7f1640a55140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T14:16:43.298 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:43 vm03 bash[17796]: debug 2026-03-09T14:16:43.047+0000 7f1640a55140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:16:43.557 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:43 vm03 bash[17796]: debug 2026-03-09T14:16:43.291+0000 7f1640a55140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T14:16:43.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:43 vm03 bash[17524]: cluster 2026-03-09T14:16:43.296492+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon x 2026-03-09T14:16:43.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:43 vm03 bash[17524]: cluster 2026-03-09T14:16:43.296492+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon x 2026-03-09T14:16:43.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:43 vm03 bash[17524]: cluster 2026-03-09T14:16:43.300744+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: x(active, starting, since 0.00437062s) 2026-03-09T14:16:43.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:43 vm03 bash[17524]: cluster 2026-03-09T14:16:43.300744+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: x(active, starting, since 0.00437062s) 2026-03-09T14:16:43.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:43 vm03 bash[17524]: audit 2026-03-09T14:16:43.303504+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14101 192.168.123.103:0/2835702550' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T14:16:43.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:43 vm03 bash[17524]: audit 2026-03-09T14:16:43.303504+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14101 192.168.123.103:0/2835702550' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch 
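[editor's note] At this point the mon logs "Activating manager daemon x" and publishes mgrmap e2 with x active. The activation can be confirmed from the CLI with `ceph mgr stat`; a small illustrative check, assuming the `ceph` CLI is available (the parsing shown is a sketch, not part of this test):

    # Confirm which mgr went active after the "Activating manager daemon"
    # message above.
    import json
    import subprocess

    stat = json.loads(subprocess.check_output(["ceph", "mgr", "stat"]))
    print("active mgr:", stat.get("active_name"),
          "available:", stat.get("available"))
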
2026-03-09T14:16:43.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:43 vm03 bash[17524]: audit 2026-03-09T14:16:43.303608+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14101 192.168.123.103:0/2835702550' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T14:16:43.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:43 vm03 bash[17524]: audit 2026-03-09T14:16:43.303608+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14101 192.168.123.103:0/2835702550' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T14:16:43.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:43 vm03 bash[17524]: audit 2026-03-09T14:16:43.303698+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14101 192.168.123.103:0/2835702550' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T14:16:43.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:43 vm03 bash[17524]: audit 2026-03-09T14:16:43.303698+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14101 192.168.123.103:0/2835702550' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T14:16:43.558 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:43 vm03 bash[17524]: audit 2026-03-09T14:16:43.304110+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14101 192.168.123.103:0/2835702550' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:16:43.558 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:43 vm03 bash[17524]: audit 2026-03-09T14:16:43.304110+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14101 192.168.123.103:0/2835702550' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:16:43.558 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:43 vm03 bash[17524]: audit 2026-03-09T14:16:43.304728+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14101 192.168.123.103:0/2835702550' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T14:16:43.558 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:43 vm03 bash[17524]: audit 2026-03-09T14:16:43.304728+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14101 192.168.123.103:0/2835702550' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T14:16:43.558 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:43 vm03 bash[17524]: cluster 2026-03-09T14:16:43.309856+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon x is now available 2026-03-09T14:16:43.558 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:43 vm03 bash[17524]: cluster 2026-03-09T14:16:43.309856+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon x is now available 2026-03-09T14:16:43.558 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:43 vm03 bash[17524]: audit 2026-03-09T14:16:43.319910+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14101 192.168.123.103:0/2835702550' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:16:43.558 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:43 vm03 bash[17524]: audit 2026-03-09T14:16:43.319910+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14101 192.168.123.103:0/2835702550' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:16:43.558 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:43 vm03 bash[17524]: audit 2026-03-09T14:16:43.321581+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14101 192.168.123.103:0/2835702550' entity='mgr.x' 2026-03-09T14:16:43.558 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:43 vm03 bash[17524]: audit 2026-03-09T14:16:43.321581+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14101 192.168.123.103:0/2835702550' entity='mgr.x'
2026-03-09T14:16:43.558 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:43 vm03 bash[17524]: audit 2026-03-09T14:16:43.323814+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14101 192.168.123.103:0/2835702550' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch
2026-03-09T14:16:43.558 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:43 vm03 bash[17524]: audit 2026-03-09T14:16:43.325302+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14101 192.168.123.103:0/2835702550' entity='mgr.x'
2026-03-09T14:16:43.558 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:43 vm03 bash[17524]: audit 2026-03-09T14:16:43.327577+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14101 192.168.123.103:0/2835702550' entity='mgr.x'
2026-03-09T14:16:44.575 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout {
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsid": "3346de4a-1bc2-11f1-95ae-3796c8433614",
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "health": {
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK",
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "checks": {},
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mutes": []
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "election_epoch": 5,
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum": [
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 0
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_names": [
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "a"
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_age": 5,
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "monmap": {
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_osds": 0,
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_up_osds": 0,
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_up_since": 0,
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_in_osds": 0,
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_in_since": 0,
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgmap": {
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgs_by_state": [],
2026-03-09T14:16:44.576 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pgs": 0,
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pools": 0,
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_objects": 0,
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "data_bytes": 0,
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_used": 0,
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_avail": 0,
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_total": 0
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsmap": {
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T14:16:38.208797+0000",
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "by_rank": [],
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "up:standby": 0
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap": {
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standbys": 0,
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modules": [
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "iostat",
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "nfs",
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "restful"
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "servicemap": {
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T14:16:38.210155+0000",
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "progress_events": {}
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }
2026-03-09T14:16:44.577 INFO:teuthology.orchestra.run.vm03.stdout:mgr is available
2026-03-09T14:16:44.832 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-09T14:16:44.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [global]
2026-03-09T14:16:44.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout fsid = 3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:16:44.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug
2026-03-09T14:16:44.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.103:3300,v1:192.168.123.103:6789]
2026-03-09T14:16:44.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true
2026-03-09T14:16:44.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true
2026-03-09T14:16:44.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false
2026-03-09T14:16:44.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0
2026-03-09T14:16:44.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-09T14:16:44.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [mgr]
2026-03-09T14:16:44.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false
2026-03-09T14:16:44.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-09T14:16:44.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [osd]
2026-03-09T14:16:44.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10
2026-03-09T14:16:44.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true
2026-03-09T14:16:44.833 INFO:teuthology.orchestra.run.vm03.stdout:Enabling cephadm module...
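The step above assimilates the test's ceph.conf overrides into the monitors' central config store and then turns on the cephadm mgr module; the [global]/[mgr]/[osd] dump appears to be the remainder that could not be assimilated (removed or host-local options such as mon_host). A minimal sketch of the equivalent manual CLI, assuming the admin keyring and a conf file at the default paths:

    # merge an existing ceph.conf into the centralized config database;
    # the command prints whatever it could not assimilate
    ceph config assimilate-conf -i /etc/ceph/ceph.conf
    # enable the cephadm orchestrator module in the mgr
    ceph mgr module enable cephadm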
2026-03-09T14:16:45.557 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:45 vm03 bash[17796]: ignoring --setuser ceph since I am not root
2026-03-09T14:16:45.557 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:45 vm03 bash[17796]: ignoring --setgroup ceph since I am not root
2026-03-09T14:16:45.557 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:45 vm03 bash[17796]: debug 2026-03-09T14:16:45.407+0000 7f1026c8e140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-09T14:16:45.557 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:45 vm03 bash[17796]: debug 2026-03-09T14:16:45.447+0000 7f1026c8e140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-09T14:16:45.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:45 vm03 bash[17524]: cluster 2026-03-09T14:16:44.308704+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: x(active, since 1.01233s)
2026-03-09T14:16:45.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:45 vm03 bash[17524]: audit 2026-03-09T14:16:44.534433+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 192.168.123.103:0/1837038701' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-09T14:16:45.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:45 vm03 bash[17524]: audit 2026-03-09T14:16:44.789985+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 192.168.123.103:0/3267228195' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
2026-03-09T14:16:45.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:45 vm03 bash[17524]: audit 2026-03-09T14:16:45.092672+0000 mon.a (mon.0) 31 : audit [INF] from='client.? 192.168.123.103:0/3331532943' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
2026-03-09T14:16:45.903 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:45 vm03 bash[17796]: debug 2026-03-09T14:16:45.567+0000 7f1026c8e140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-09T14:16:46.002 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout {
2026-03-09T14:16:46.002 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 4,
2026-03-09T14:16:46.002 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-09T14:16:46.002 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "active_name": "x",
2026-03-09T14:16:46.002 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standby": 0
2026-03-09T14:16:46.002 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }
2026-03-09T14:16:46.002 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for the mgr to restart...
2026-03-09T14:16:46.002 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mgr epoch 4...
2026-03-09T14:16:46.307 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:45 vm03 bash[17796]: debug 2026-03-09T14:16:45.895+0000 7f1026c8e140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-09T14:16:46.622 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:46 vm03 bash[17796]: debug 2026-03-09T14:16:46.399+0000 7f1026c8e140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-09T14:16:46.622 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:46 vm03 bash[17796]: debug 2026-03-09T14:16:46.487+0000 7f1026c8e140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-09T14:16:46.622 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:46 vm03 bash[17524]: audit 2026-03-09T14:16:45.310594+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.103:0/3331532943' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
2026-03-09T14:16:46.622 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:46 vm03 bash[17524]: cluster 2026-03-09T14:16:45.313754+0000 mon.a (mon.0) 33 : cluster [DBG] mgrmap e4: x(active, since 2s)
2026-03-09T14:16:46.622 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:46 vm03 bash[17524]: audit 2026-03-09T14:16:45.938537+0000 mon.a (mon.0) 34 : audit [DBG] from='client.? 192.168.123.103:0/1601037514' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-09T14:16:46.899 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:46 vm03 bash[17796]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-09T14:16:46.899 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:46 vm03 bash[17796]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-09T14:16:46.899 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:46 vm03 bash[17796]: from numpy import show_config as show_numpy_config
2026-03-09T14:16:46.899 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:46 vm03 bash[17796]: debug 2026-03-09T14:16:46.623+0000 7f1026c8e140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-09T14:16:46.899 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:46 vm03 bash[17796]: debug 2026-03-09T14:16:46.767+0000 7f1026c8e140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-09T14:16:46.899 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:46 vm03 bash[17796]: debug 2026-03-09T14:16:46.811+0000 7f1026c8e140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-09T14:16:46.899 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:46 vm03 bash[17796]: debug 2026-03-09T14:16:46.847+0000 7f1026c8e140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-09T14:16:47.307 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:46 vm03 bash[17796]: debug 2026-03-09T14:16:46.891+0000 7f1026c8e140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-09T14:16:47.307 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:46 vm03 bash[17796]: debug 2026-03-09T14:16:46.943+0000 7f1026c8e140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-09T14:16:47.664 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:47 vm03 bash[17796]: debug 2026-03-09T14:16:47.387+0000 7f1026c8e140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-09T14:16:47.664 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:47 vm03 bash[17796]: debug 2026-03-09T14:16:47.427+0000 7f1026c8e140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-09T14:16:47.664 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:47 vm03 bash[17796]: debug 2026-03-09T14:16:47.467+0000 7f1026c8e140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-09T14:16:47.664 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:47 vm03 bash[17796]: debug 2026-03-09T14:16:47.611+0000 7f1026c8e140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-09T14:16:47.986 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:47 vm03 bash[17796]: debug 2026-03-09T14:16:47.655+0000 7f1026c8e140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-09T14:16:47.986 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:47 vm03 bash[17796]: debug 2026-03-09T14:16:47.699+0000 7f1026c8e140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-09T14:16:47.986 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:47 vm03 bash[17796]: debug 2026-03-09T14:16:47.819+0000 7f1026c8e140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-09T14:16:48.245 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:47 vm03 bash[17796]: debug 2026-03-09T14:16:47.979+0000 7f1026c8e140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-09T14:16:48.245 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:48 vm03 bash[17796]: debug 2026-03-09T14:16:48.155+0000 7f1026c8e140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-09T14:16:48.245 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:48 vm03 bash[17796]: debug 2026-03-09T14:16:48.191+0000 7f1026c8e140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-09T14:16:48.557 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:48 vm03 bash[17796]: debug 2026-03-09T14:16:48.239+0000 7f1026c8e140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-09T14:16:48.557 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:48 vm03 bash[17796]: debug 2026-03-09T14:16:48.399+0000 7f1026c8e140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-09T14:16:49.057 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:48 vm03 bash[17796]: debug 2026-03-09T14:16:48.635+0000 7f1026c8e140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-09T14:16:49.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:48 vm03 bash[17524]: cluster 2026-03-09T14:16:48.640051+0000 mon.a (mon.0) 35 : cluster [INF] Active manager daemon x restarted
2026-03-09T14:16:49.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:48 vm03 bash[17524]: cluster 2026-03-09T14:16:48.640315+0000 mon.a (mon.0) 36 : cluster [INF] Activating manager daemon x
2026-03-09T14:16:49.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:48 vm03 bash[17524]: cluster 2026-03-09T14:16:48.645216+0000 mon.a (mon.0) 37 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in
2026-03-09T14:16:49.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:48 vm03 bash[17524]: cluster 2026-03-09T14:16:48.645391+0000 mon.a (mon.0) 38 : cluster [DBG] mgrmap e5: x(active, starting, since 0.00518238s)
2026-03-09T14:16:49.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:48 vm03 bash[17524]: audit 2026-03-09T14:16:48.647831+0000 mon.a (mon.0) 39 : audit [DBG] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T14:16:49.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:48 vm03 bash[17524]: audit 2026-03-09T14:16:48.648704+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-09T14:16:49.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:48 vm03 bash[17524]: audit 2026-03-09T14:16:48.649122+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-09T14:16:49.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:48 vm03 bash[17524]: audit 2026-03-09T14:16:48.649277+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-09T14:16:49.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:48 vm03 bash[17524]: audit 2026-03-09T14:16:48.649394+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-09T14:16:49.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:48 vm03 bash[17524]: cluster 2026-03-09T14:16:48.655143+0000 mon.a (mon.0) 44 : cluster [INF] Manager daemon x is now available
2026-03-09T14:16:49.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:48 vm03 bash[17524]: audit 2026-03-09T14:16:48.663186+0000 mon.a (mon.0) 45 : audit [INF] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x'
2026-03-09T14:16:49.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:48 vm03 bash[17524]: audit 2026-03-09T14:16:48.667384+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x'
2026-03-09T14:16:49.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:48 vm03 bash[17524]: audit 2026-03-09T14:16:48.682063+0000 mon.a (mon.0) 47 : audit [DBG] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:16:49.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:48 vm03 bash[17524]: audit 2026-03-09T14:16:48.683570+0000 mon.a (mon.0) 48 : audit [DBG] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:16:49.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:48 vm03 bash[17524]: audit 2026-03-09T14:16:48.685800+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2026-03-09T14:16:49.731 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout {
2026-03-09T14:16:49.732 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 6,
2026-03-09T14:16:49.732 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "initialized": true
2026-03-09T14:16:49.732 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }
2026-03-09T14:16:49.732 INFO:teuthology.orchestra.run.vm03.stdout:mgr epoch 4 is available
2026-03-09T14:16:49.732 INFO:teuthology.orchestra.run.vm03.stdout:Setting orchestrator backend to cephadm...
2026-03-09T14:16:50.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:49 vm03 bash[17524]: cephadm 2026-03-09T14:16:48.660325+0000 mgr.x (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration.
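"Setting orchestrator backend to cephadm" points the generic orchestrator CLI at the newly enabled module; the mgr-stat and mgrmap-epoch polling above is how the task waits out the mgr restart that the module enable triggers. A sketch of the equivalent commands:

    # select cephadm as the backend for all "ceph orch ..." commands
    ceph orch set backend cephadm
    # verify the backend took effect and the mgr is back
    ceph orch status
    ceph mgr stat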
2026-03-09T14:16:50.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:49 vm03 bash[17524]: audit 2026-03-09T14:16:48.693408+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch
2026-03-09T14:16:50.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:49 vm03 bash[17524]: audit 2026-03-09T14:16:49.193147+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x'
2026-03-09T14:16:50.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:49 vm03 bash[17524]: audit 2026-03-09T14:16:49.259784+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x'
2026-03-09T14:16:50.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:49 vm03 bash[17524]: cluster 2026-03-09T14:16:49.651824+0000 mon.a (mon.0) 53 : cluster [DBG] mgrmap e6: x(active, since 1.01162s)
2026-03-09T14:16:50.471 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout value unchanged
2026-03-09T14:16:50.471 INFO:teuthology.orchestra.run.vm03.stdout:Generating ssh key...
2026-03-09T14:16:51.017 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:50 vm03 bash[17796]: Generating public/private ed25519 key pair.
2026-03-09T14:16:51.017 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:50 vm03 bash[17796]: Your identification has been saved in /tmp/tmp095orsml/key
2026-03-09T14:16:51.017 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:50 vm03 bash[17796]: Your public key has been saved in /tmp/tmp095orsml/key.pub
2026-03-09T14:16:51.017 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:50 vm03 bash[17796]: The key fingerprint is:
2026-03-09T14:16:51.017 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:50 vm03 bash[17796]: SHA256:GD9MTbvC4baVQDYPIjUOjGepzTTCpGaIPoyDoc5FPrc ceph-3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:16:51.017 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:50 vm03 bash[17796]: The key's randomart image is:
2026-03-09T14:16:51.017 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:50 vm03 bash[17796]: +--[ED25519 256]--+
2026-03-09T14:16:51.017 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:50 vm03 bash[17796]: | o.ooo+ = . |
2026-03-09T14:16:51.017 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:50 vm03 bash[17796]: |o.+ B+ = * . |
2026-03-09T14:16:51.017 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:50 vm03 bash[17796]: |=o X .o + + |
2026-03-09T14:16:51.017 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:50 vm03 bash[17796]: |O.+ o O o o |
2026-03-09T14:16:51.018 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:50 vm03 bash[17796]: |=+ + .. S + |
2026-03-09T14:16:51.018 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:50 vm03 bash[17796]: |o.o o .. = |
2026-03-09T14:16:51.018 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:50 vm03 bash[17796]: | o E . |
2026-03-09T14:16:51.018 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:50 vm03 bash[17796]: | |
2026-03-09T14:16:51.018 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:50 vm03 bash[17796]: | |
2026-03-09T14:16:51.018 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:50 vm03 bash[17796]: +----[SHA256]-----+
2026-03-09T14:16:51.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBmenO0JWqbny63ax9fuiFIvU3mBZc+cQ8xwn8yAnGAJ ceph-3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:16:51.047 INFO:teuthology.orchestra.run.vm03.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub
2026-03-09T14:16:51.047 INFO:teuthology.orchestra.run.vm03.stdout:Adding key to root@localhost authorized_keys...
2026-03-09T14:16:51.047 INFO:teuthology.orchestra.run.vm03.stdout:Adding host vm03...
2026-03-09T14:16:51.291 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:51 vm03 bash[17524]: audit 2026-03-09T14:16:49.652673+0000 mgr.x (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-09T14:16:51.291 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:51 vm03 bash[17524]: audit 2026-03-09T14:16:49.657130+0000 mgr.x (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-09T14:16:51.291 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:51 vm03 bash[17524]: audit 2026-03-09T14:16:50.032145+0000 mgr.x (mgr.14118) 4 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:16:51.291 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:51 vm03 bash[17524]: audit 2026-03-09T14:16:50.035978+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x'
2026-03-09T14:16:51.291 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:51 vm03 bash[17524]: audit 2026-03-09T14:16:50.042394+0000 mon.a (mon.0) 55 : audit [DBG] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:16:51.291 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:51 vm03 bash[17524]: cephadm 2026-03-09T14:16:50.060122+0000 mgr.x (mgr.14118) 5 : cephadm [INF] [09/Mar/2026:14:16:50] ENGINE Bus STARTING
2026-03-09T14:16:51.291 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:51 vm03 bash[17524]: cephadm 2026-03-09T14:16:50.171951+0000 mgr.x (mgr.14118) 6 : cephadm [INF] [09/Mar/2026:14:16:50] ENGINE Serving on https://192.168.123.103:7150
2026-03-09T14:16:51.291 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:51 vm03 bash[17524]: cephadm 2026-03-09T14:16:50.172410+0000 mgr.x (mgr.14118) 7 : cephadm [INF] [09/Mar/2026:14:16:50] ENGINE Client ('192.168.123.103', 58170) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-09T14:16:51.291 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:51 vm03 bash[17524]: cephadm 2026-03-09T14:16:50.272937+0000 mgr.x (mgr.14118) 8 : cephadm [INF] [09/Mar/2026:14:16:50] ENGINE Serving on http://192.168.123.103:8765
2026-03-09T14:16:51.291 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:51 vm03 bash[17524]: cephadm 2026-03-09T14:16:50.273227+0000 mgr.x (mgr.14118) 9 : cephadm [INF] [09/Mar/2026:14:16:50] ENGINE Bus STARTED
2026-03-09T14:16:51.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:51 vm03 bash[17524]: audit 2026-03-09T14:16:50.273709+0000 mon.a (mon.0) 56 : audit [DBG] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:16:51.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:51 vm03 bash[17524]: audit 2026-03-09T14:16:50.722898+0000 mon.a (mon.0) 57 : audit [INF] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x'
2026-03-09T14:16:51.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:51 vm03 bash[17524]: audit 2026-03-09T14:16:50.724822+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x'
2026-03-09T14:16:52.186 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:52 vm03 bash[17524]: audit 2026-03-09T14:16:50.413675+0000 mgr.x (mgr.14118) 10 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:16:52.186 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:52 vm03 bash[17524]: audit 2026-03-09T14:16:50.699736+0000 mgr.x (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:16:52.186 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:52 vm03 bash[17524]: cephadm 2026-03-09T14:16:50.699976+0000 mgr.x (mgr.14118) 12 : cephadm [INF] Generating ssh key...
2026-03-09T14:16:52.186 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:52 vm03 bash[17524]: audit 2026-03-09T14:16:51.000306+0000 mgr.x (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:16:52.186 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:52 vm03 bash[17524]: cluster 2026-03-09T14:16:51.053079+0000 mon.a (mon.0) 59 : cluster [DBG] mgrmap e7: x(active, since 2s)
2026-03-09T14:16:52.186 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:52 vm03 bash[17524]: audit 2026-03-09T14:16:51.279729+0000 mgr.x (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm03", "addr": "192.168.123.103", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:16:53.265 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout Added host 'vm03' with addr '192.168.123.103'
2026-03-09T14:16:53.265 INFO:teuthology.orchestra.run.vm03.stdout:Deploying unmanaged mon service...
2026-03-09T14:16:53.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:53 vm03 bash[17524]: cephadm 2026-03-09T14:16:51.850777+0000 mgr.x (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm03
2026-03-09T14:16:53.617 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout Scheduled mon update...
2026-03-09T14:16:53.617 INFO:teuthology.orchestra.run.vm03.stdout:Deploying unmanaged mgr service...
2026-03-09T14:16:53.906 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout Scheduled mgr update...
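The ssh-key and host-add sequence above distributes the cluster's ssh identity and registers the first host, and the "unmanaged" mon/mgr specs tell cephadm to record the services without (re)deploying daemons on its own. A hedged sketch of the equivalent CLI, with the hostname and address taken from the log and the ceph.pub path illustrative:

    ceph cephadm generate-key                  # create the cluster ssh key
    ceph cephadm get-pub-key > ceph.pub        # export the public half
    ssh-copy-id -f -i ceph.pub root@vm03       # authorize it on the target host
    ceph orch host add vm03 192.168.123.103    # register the host
    ceph orch apply mon --unmanaged            # record mon spec, no autodeploy
    ceph orch apply mgr --unmanaged            # record mgr spec, no autodeploy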
2026-03-09T14:16:54.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:54 vm03 bash[17524]: audit 2026-03-09T14:16:53.208511+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x'
2026-03-09T14:16:54.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:54 vm03 bash[17524]: cephadm 2026-03-09T14:16:53.208964+0000 mgr.x (mgr.14118) 16 : cephadm [INF] Added host vm03
2026-03-09T14:16:54.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:54 vm03 bash[17524]: audit 2026-03-09T14:16:53.212192+0000 mon.a (mon.0) 61 : audit [DBG] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:16:54.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:54 vm03 bash[17524]: audit 2026-03-09T14:16:53.574770+0000 mon.a (mon.0) 62 : audit [INF] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x'
2026-03-09T14:16:54.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:54 vm03 bash[17524]: audit 2026-03-09T14:16:53.861980+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x'
2026-03-09T14:16:54.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:54 vm03 bash[17524]: audit 2026-03-09T14:16:54.136352+0000 mon.a (mon.0) 64 : audit [INF] from='client.? 192.168.123.103:0/2564945460' entity='client.admin'
2026-03-09T14:16:54.463 INFO:teuthology.orchestra.run.vm03.stdout:Enabling the dashboard module...
2026-03-09T14:16:55.230 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:55 vm03 bash[17524]: audit 2026-03-09T14:16:53.570818+0000 mgr.x (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:16:55.230 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:55 vm03 bash[17524]: cephadm 2026-03-09T14:16:53.571754+0000 mgr.x (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5
2026-03-09T14:16:55.230 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:55 vm03 bash[17524]: audit 2026-03-09T14:16:53.858061+0000 mgr.x (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:16:55.230 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:55 vm03 bash[17524]: cephadm 2026-03-09T14:16:53.858814+0000 mgr.x (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2
2026-03-09T14:16:55.230 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:55 vm03 bash[17524]: audit 2026-03-09T14:16:54.408466+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 192.168.123.103:0/4028812733' entity='client.admin'
2026-03-09T14:16:55.230 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:55 vm03 bash[17524]: audit 2026-03-09T14:16:54.748323+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.103:0/754411936' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
2026-03-09T14:16:55.230 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:55 vm03 bash[17524]: audit 2026-03-09T14:16:55.050189+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x'
2026-03-09T14:16:55.808 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:55 vm03 bash[17796]: ignoring --setuser ceph since I am not root
2026-03-09T14:16:55.808 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:55 vm03 bash[17796]: ignoring --setgroup ceph since I am not root
2026-03-09T14:16:55.808 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:55 vm03 bash[17796]: debug 2026-03-09T14:16:55.599+0000 7fbc3725d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-09T14:16:55.808 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:55 vm03 bash[17796]: debug 2026-03-09T14:16:55.659+0000 7fbc3725d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-09T14:16:55.970 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout {
2026-03-09T14:16:55.970 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 8,
2026-03-09T14:16:55.970 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-09T14:16:55.970 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "active_name": "x",
2026-03-09T14:16:55.970 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standby": 0
2026-03-09T14:16:55.970 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }
2026-03-09T14:16:55.970 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for the mgr to restart...
2026-03-09T14:16:55.970 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mgr epoch 8...
2026-03-09T14:16:56.195 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:55 vm03 bash[17796]: debug 2026-03-09T14:16:55.835+0000 7fbc3725d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-09T14:16:56.556 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:56 vm03 bash[17796]: debug 2026-03-09T14:16:56.187+0000 7fbc3725d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-09T14:16:56.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:56 vm03 bash[17524]: audit 2026-03-09T14:16:55.385143+0000 mon.a (mon.0) 68 : audit [INF] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x'
2026-03-09T14:16:56.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:56 vm03 bash[17524]: audit 2026-03-09T14:16:55.409689+0000 mon.a (mon.0) 69 : audit [INF] from='client.? 192.168.123.103:0/754411936' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
2026-03-09T14:16:56.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:56 vm03 bash[17524]: cluster 2026-03-09T14:16:55.412099+0000 mon.a (mon.0) 70 : cluster [DBG] mgrmap e8: x(active, since 6s)
2026-03-09T14:16:56.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:56 vm03 bash[17524]: audit 2026-03-09T14:16:55.910446+0000 mon.a (mon.0) 71 : audit [DBG] from='client.? 192.168.123.103:0/2250228207' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-09T14:16:57.009 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:56 vm03 bash[17796]: debug 2026-03-09T14:16:56.655+0000 7fbc3725d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-09T14:16:57.009 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:56 vm03 bash[17796]: debug 2026-03-09T14:16:56.739+0000 7fbc3725d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-09T14:16:57.009 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:56 vm03 bash[17796]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-09T14:16:57.009 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:56 vm03 bash[17796]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-09T14:16:57.009 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:56 vm03 bash[17796]: from numpy import show_config as show_numpy_config
2026-03-09T14:16:57.009 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:56 vm03 bash[17796]: debug 2026-03-09T14:16:56.867+0000 7fbc3725d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-09T14:16:57.306 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:57 vm03 bash[17796]: debug 2026-03-09T14:16:57.003+0000 7fbc3725d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-09T14:16:57.307 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:57 vm03 bash[17796]: debug 2026-03-09T14:16:57.039+0000 7fbc3725d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-09T14:16:57.307 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:57 vm03 bash[17796]: debug 2026-03-09T14:16:57.083+0000 7fbc3725d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-09T14:16:57.307 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:57 vm03 bash[17796]: debug 2026-03-09T14:16:57.127+0000 7fbc3725d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-09T14:16:57.307 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:57 vm03 bash[17796]: debug 2026-03-09T14:16:57.183+0000 7fbc3725d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-09T14:16:57.904 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:57 vm03 bash[17796]: debug 2026-03-09T14:16:57.631+0000 7fbc3725d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-09T14:16:57.904 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:57 vm03 bash[17796]: debug 2026-03-09T14:16:57.671+0000 7fbc3725d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-09T14:16:57.904 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:57 vm03 bash[17796]: debug 2026-03-09T14:16:57.711+0000 7fbc3725d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-09T14:16:57.904 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:57 vm03 bash[17796]: debug 2026-03-09T14:16:57.855+0000 7fbc3725d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-09T14:16:58.215 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:57 vm03 bash[17796]: debug 2026-03-09T14:16:57.895+0000 7fbc3725d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-09T14:16:58.215 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:57 vm03 bash[17796]: debug 2026-03-09T14:16:57.935+0000 7fbc3725d140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-09T14:16:58.215 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:58 vm03 bash[17796]: debug 2026-03-09T14:16:58.047+0000 7fbc3725d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-09T14:16:58.491 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:58 vm03 bash[17796]: debug 2026-03-09T14:16:58.207+0000 7fbc3725d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-09T14:16:58.491 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:58 vm03 bash[17796]: debug 2026-03-09T14:16:58.395+0000 7fbc3725d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-09T14:16:58.491 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:58 vm03 bash[17796]: debug 2026-03-09T14:16:58.435+0000 7fbc3725d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-09T14:16:58.491 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:58 vm03 bash[17796]: debug 2026-03-09T14:16:58.483+0000 7fbc3725d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-09T14:16:58.806 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:58 vm03 bash[17796]: debug 2026-03-09T14:16:58.643+0000 7fbc3725d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-09T14:16:59.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:58 vm03 bash[17524]: cluster 2026-03-09T14:16:58.892737+0000 mon.a (mon.0) 72 : cluster [INF] Active manager daemon x restarted
2026-03-09T14:16:59.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:58 vm03 bash[17524]: cluster 2026-03-09T14:16:58.893165+0000 mon.a (mon.0) 73 : cluster [INF] Activating manager daemon x
2026-03-09T14:16:59.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:58 vm03 bash[17524]: cluster 2026-03-09T14:16:58.898531+0000 mon.a (mon.0) 74 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in
2026-03-09T14:16:59.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:58 vm03 bash[17524]: cluster 2026-03-09T14:16:58.898752+0000 mon.a (mon.0) 75 : cluster [DBG] mgrmap e9: x(active, starting, since 0.00568639s)
2026-03-09T14:16:59.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:58 vm03 bash[17524]: audit 2026-03-09T14:16:58.903902+0000 mon.a (mon.0) 76 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T14:16:59.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:58 vm03 bash[17524]: audit 2026-03-09T14:16:58.904900+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-09T14:16:59.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:58 vm03 bash[17524]: audit 2026-03-09T14:16:58.905958+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-09T14:16:59.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:58 vm03 bash[17524]: audit 2026-03-09T14:16:58.906377+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-09T14:16:59.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:58 vm03 bash[17524]: audit 2026-03-09T14:16:58.906791+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-09T14:16:59.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:58 vm03 bash[17524]: cluster 2026-03-09T14:16:58.913076+0000 mon.a (mon.0) 81 : cluster [INF] Manager daemon x is now available
2026-03-09T14:16:59.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:58 vm03 bash[17524]: audit 2026-03-09T14:16:58.936530+0000 mon.a (mon.0) 82 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:16:59.307 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:16:58 vm03 bash[17796]: debug 2026-03-09T14:16:58.887+0000 7fbc3725d140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-09T14:16:59.949 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout {
2026-03-09T14:16:59.949 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 10,
2026-03-09T14:16:59.949 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "initialized": true
2026-03-09T14:16:59.949 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }
2026-03-09T14:16:59.949 INFO:teuthology.orchestra.run.vm03.stdout:mgr epoch 8 is available
2026-03-09T14:16:59.949 INFO:teuthology.orchestra.run.vm03.stdout:Generating a dashboard self-signed certificate...
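Enabling the dashboard restarts the active mgr (hence the second wave of NOTIFY_TYPES warnings and the "Waiting for mgr epoch 8" poll); the certificate and admin-user steps that follow correspond roughly to the commands below. A sketch only: the password-file path is illustrative, and the bootstrap actually generates a random password itself:

    ceph mgr module enable dashboard
    ceph dashboard create-self-signed-cert
    # write the desired password to a file, then create the admin account
    echo 'kd01n72jdx' > /tmp/dashboard-pass
    ceph dashboard ac-user-create admin -i /tmp/dashboard-pass administrator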
2026-03-09T14:17:00.248 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:59 vm03 bash[17524]: audit 2026-03-09T14:16:58.965999+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2026-03-09T14:17:00.249 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:59 vm03 bash[17524]: audit 2026-03-09T14:16:58.975877+0000 mon.a (mon.0) 84 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch
2026-03-09T14:17:00.249 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:16:59 vm03 bash[17524]: cluster 2026-03-09T14:16:59.902012+0000 mon.a (mon.0) 85 : cluster [DBG] mgrmap e10: x(active, since 1.00895s)
2026-03-09T14:17:00.280 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout Self-signed certificate created
2026-03-09T14:17:00.281 INFO:teuthology.orchestra.run.vm03.stdout:Creating initial admin user...
2026-03-09T14:17:00.748 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$e.P9fZQeeS6OEzwoEkwbueY6B.Tg5aQGjJ1pjcvIGBR7SYlUgT5wC", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773065820, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true}
2026-03-09T14:17:00.748 INFO:teuthology.orchestra.run.vm03.stdout:Fetching dashboard port number...
2026-03-09T14:17:01.028 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 8443
2026-03-09T14:17:01.028 INFO:teuthology.orchestra.run.vm03.stdout:firewalld does not appear to be present
2026-03-09T14:17:01.028 INFO:teuthology.orchestra.run.vm03.stdout:Not possible to open ports <[8443]>. firewalld.service is not available
2026-03-09T14:17:01.028 INFO:teuthology.orchestra.run.vm03.stdout:Ceph Dashboard is now available at:
2026-03-09T14:17:01.029 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:17:01.029 INFO:teuthology.orchestra.run.vm03.stdout: URL: https://vm03.local:8443/
2026-03-09T14:17:01.029 INFO:teuthology.orchestra.run.vm03.stdout: User: admin
2026-03-09T14:17:01.029 INFO:teuthology.orchestra.run.vm03.stdout: Password: kd01n72jdx
2026-03-09T14:17:01.029 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:17:01.029 INFO:teuthology.orchestra.run.vm03.stdout:Saving cluster configuration to /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/config directory
2026-03-09T14:17:01.360 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status
2026-03-09T14:17:01.360 INFO:teuthology.orchestra.run.vm03.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config:
2026-03-09T14:17:01.360 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:17:01.360 INFO:teuthology.orchestra.run.vm03.stdout: sudo /home/ubuntu/cephtest/cephadm shell --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
2026-03-09T14:17:01.360 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:17:01.360 INFO:teuthology.orchestra.run.vm03.stdout:Or, if you are only running a single cluster on this host:
2026-03-09T14:17:01.360 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:17:01.360 INFO:teuthology.orchestra.run.vm03.stdout: sudo /home/ubuntu/cephtest/cephadm shell
2026-03-09T14:17:01.360 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:17:01.360 INFO:teuthology.orchestra.run.vm03.stdout:Please consider enabling telemetry to help improve Ceph:
2026-03-09T14:17:01.360 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:17:01.360 INFO:teuthology.orchestra.run.vm03.stdout: ceph telemetry on
2026-03-09T14:17:01.360 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:17:01.360 INFO:teuthology.orchestra.run.vm03.stdout:For more information see:
2026-03-09T14:17:01.360 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:17:01.360 INFO:teuthology.orchestra.run.vm03.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/
2026-03-09T14:17:01.360 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:17:01.360 INFO:teuthology.orchestra.run.vm03.stdout:Bootstrap complete.
2026-03-09T14:17:01.383 INFO:tasks.cephadm:Fetching config...
2026-03-09T14:17:01.384 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-09T14:17:01.384 DEBUG:teuthology.orchestra.run.vm03:> dd if=/etc/ceph/ceph.conf of=/dev/stdout
2026-03-09T14:17:01.387 INFO:tasks.cephadm:Fetching client.admin keyring...
2026-03-09T14:17:01.387 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-09T14:17:01.387 DEBUG:teuthology.orchestra.run.vm03:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout
2026-03-09T14:17:01.431 INFO:tasks.cephadm:Fetching mon keyring...
2026-03-09T14:17:01.431 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-09T14:17:01.431 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/keyring of=/dev/stdout
2026-03-09T14:17:01.482 INFO:tasks.cephadm:Fetching pub ssh key...
2026-03-09T14:17:01.482 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-09T14:17:01.482 DEBUG:teuthology.orchestra.run.vm03:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout
2026-03-09T14:17:01.531 INFO:tasks.cephadm:Installing pub ssh key for root users...
2026-03-09T14:17:01.531 DEBUG:teuthology.orchestra.run.vm03:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBmenO0JWqbny63ax9fuiFIvU3mBZc+cQ8xwn8yAnGAJ ceph-3346de4a-1bc2-11f1-95ae-3796c8433614' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys
2026-03-09T14:17:01.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:01 vm03 bash[17524]: cephadm 2026-03-09T14:16:59.623059+0000 mgr.x (mgr.14150) 1 : cephadm [INF] [09/Mar/2026:14:16:59] ENGINE Bus STARTING
2026-03-09T14:17:01.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:01 vm03 bash[17524]: cephadm 2026-03-09T14:16:59.724715+0000 mgr.x (mgr.14150) 2 : cephadm [INF] [09/Mar/2026:14:16:59] ENGINE Serving on http://192.168.123.103:8765
2026-03-09T14:17:01.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:01 vm03 bash[17524]: cephadm 2026-03-09T14:16:59.834905+0000 mgr.x (mgr.14150) 3 : cephadm [INF] [09/Mar/2026:14:16:59] ENGINE Serving on https://192.168.123.103:7150
2026-03-09T14:17:01.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:01 vm03 bash[17524]: cephadm 2026-03-09T14:16:59.834994+0000 mgr.x (mgr.14150) 4 : cephadm [INF] [09/Mar/2026:14:16:59] ENGINE Bus STARTED
2026-03-09T14:17:01.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:01 vm03 bash[17524]: cephadm 2026-03-09T14:16:59.835554+0000 mgr.x (mgr.14150) 5 : cephadm [INF] [09/Mar/2026:14:16:59] ENGINE Client ('192.168.123.103', 53372) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-09T14:17:01.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:01 vm03 bash[17524]: audit 2026-03-09T14:16:59.903221+0000 mgr.x (mgr.14150) 6 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-09T14:17:01.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:01 vm03 bash[17524]: audit 2026-03-09T14:16:59.907650+0000 mgr.x (mgr.14150) 7 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-09T14:17:01.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:01 vm03 bash[17524]: audit 2026-03-09T14:17:00.188002+0000 mgr.x (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:17:01.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:01 vm03 bash[17524]: audit 2026-03-09T14:17:00.227959+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:01.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:01 vm03 bash[17524]: audit 2026-03-09T14:17:00.230419+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:01.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:01 vm03 bash[17524]: audit 2026-03-09T14:17:00.702287+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:01.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:01 vm03 bash[17524]: audit 2026-03-09T14:17:00.967806+0000 mon.a (mon.0) 89 : audit [DBG] from='client.? 192.168.123.103:0/693571450' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch
2026-03-09T14:17:01.571 INFO:teuthology.orchestra.run.vm03.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBmenO0JWqbny63ax9fuiFIvU3mBZc+cQ8xwn8yAnGAJ ceph-3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:17:01.577 DEBUG:teuthology.orchestra.run.vm04:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBmenO0JWqbny63ax9fuiFIvU3mBZc+cQ8xwn8yAnGAJ ceph-3346de4a-1bc2-11f1-95ae-3796c8433614' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys
2026-03-09T14:17:01.588 INFO:teuthology.orchestra.run.vm04.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBmenO0JWqbny63ax9fuiFIvU3mBZc+cQ8xwn8yAnGAJ ceph-3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:17:01.593 DEBUG:teuthology.orchestra.run.vm05:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBmenO0JWqbny63ax9fuiFIvU3mBZc+cQ8xwn8yAnGAJ ceph-3346de4a-1bc2-11f1-95ae-3796c8433614' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys
2026-03-09T14:17:01.603 INFO:teuthology.orchestra.run.vm05.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBmenO0JWqbny63ax9fuiFIvU3mBZc+cQ8xwn8yAnGAJ ceph-3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:17:01.607 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph config set mgr mgr/cephadm/allow_ptrace true
2026-03-09T14:17:02.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:02 vm03 bash[17524]: audit 2026-03-09T14:17:00.549855+0000 mgr.x (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:17:02.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:02 vm03 bash[17524]: audit 2026-03-09T14:17:01.316344+0000 mon.a (mon.0) 90 : audit [INF] from='client.? 192.168.123.103:0/381537853' entity='client.admin'
2026-03-09T14:17:02.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:02 vm03 bash[17524]: cluster 2026-03-09T14:17:01.706636+0000 mon.a (mon.0) 91 : cluster [DBG] mgrmap e11: x(active, since 2s)
2026-03-09T14:17:04.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:04 vm03 bash[17524]: audit 2026-03-09T14:17:03.450854+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:04.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:04 vm03 bash[17524]: audit 2026-03-09T14:17:04.093514+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:05.306 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config
2026-03-09T14:17:05.906 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755
2026-03-09T14:17:05.906 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph orch client-keyring set client.admin '*' --mode 0755
2026-03-09T14:17:06.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:06 vm03 bash[17524]: cluster 2026-03-09T14:17:05.457328+0000 mon.a (mon.0) 94 : cluster [DBG] mgrmap e12: x(active, since 6s)
2026-03-09T14:17:06.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:06 vm03 bash[17524]: audit 2026-03-09T14:17:05.581214+0000 mon.a (mon.0) 95 : audit [INF] from='client.? 192.168.123.103:0/2441871205' entity='client.admin'
2026-03-09T14:17:10.399 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config
2026-03-09T14:17:10.743 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm04
2026-03-09T14:17:10.743 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-09T14:17:10.743 DEBUG:teuthology.orchestra.run.vm04:> dd of=/etc/ceph/ceph.conf
2026-03-09T14:17:10.746 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-09T14:17:10.746 DEBUG:teuthology.orchestra.run.vm04:> dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-09T14:17:10.793 INFO:tasks.cephadm:Adding host vm04 to orchestrator...
2026-03-09T14:17:10.793 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph orch host add vm04
2026-03-09T14:17:11.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:10 vm03 bash[17524]: audit 2026-03-09T14:17:09.883300+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:11.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:10 vm03 bash[17524]: audit 2026-03-09T14:17:09.886383+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:11.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:10 vm03 bash[17524]: audit 2026-03-09T14:17:09.887180+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:17:11.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:10 vm03 bash[17524]: audit 2026-03-09T14:17:09.890723+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:11.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:10 vm03 bash[17524]: audit 2026-03-09T14:17:09.898420+0000 mon.a (mon.0) 100 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:17:11.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:10 vm03 bash[17524]: audit 2026-03-09T14:17:09.902060+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:11.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:10 vm03 bash[17524]: audit 2026-03-09T14:17:10.644084+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:11.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:10 vm03 bash[17524]: audit 2026-03-09T14:17:10.645054+0000 mon.a (mon.0) 103 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:17:11.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:10 vm03 bash[17524]: audit 2026-03-09T14:17:10.646058+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:17:11.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:10 vm03 bash[17524]: audit 2026-03-09T14:17:10.646470+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:17:11.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:10 vm03 bash[17524]: audit 2026-03-09T14:17:10.795849+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:11.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:10 vm03 bash[17524]: audit 2026-03-09T14:17:10.798725+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:11.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:10 vm03 bash[17524]: audit 2026-03-09T14:17:10.801120+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:12.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:11 vm03 bash[17524]: audit 2026-03-09T14:17:10.640941+0000 mgr.x (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:17:12.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:11 vm03 bash[17524]: cephadm 2026-03-09T14:17:10.647048+0000 mgr.x (mgr.14150) 11 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf
2026-03-09T14:17:12.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:11 vm03 bash[17524]: cephadm 2026-03-09T14:17:10.682973+0000 mgr.x (mgr.14150) 12 : cephadm [INF] Updating vm03:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/config/ceph.conf
2026-03-09T14:17:12.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:11 vm03 bash[17524]: cephadm 2026-03-09T14:17:10.727278+0000 mgr.x (mgr.14150) 13 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring
2026-03-09T14:17:12.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:11 vm03 bash[17524]: cephadm 2026-03-09T14:17:10.760250+0000 mgr.x (mgr.14150) 14 : cephadm [INF] Updating vm03:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/config/ceph.client.admin.keyring
2026-03-09T14:17:15.411 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config
2026-03-09T14:17:17.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:16 vm03 bash[17524]: audit 2026-03-09T14:17:15.671986+0000 mgr.x (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm04", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:17:17.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:16 vm03 bash[17524]: cephadm 2026-03-09T14:17:16.230711+0000 mgr.x (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm04
2026-03-09T14:17:17.530 INFO:teuthology.orchestra.run.vm03.stdout:Added host 'vm04' with addr '192.168.123.104'
2026-03-09T14:17:17.583 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph orch host ls --format=json
2026-03-09T14:17:18.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:18 vm03 bash[17524]: audit 2026-03-09T14:17:17.525535+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:18.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:18 vm03 bash[17524]: cephadm 2026-03-09T14:17:17.526385+0000 mgr.x (mgr.14150) 17 : cephadm [INF] Added host vm04
2026-03-09T14:17:18.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:18 vm03 bash[17524]: audit 2026-03-09T14:17:17.526727+0000 mon.a (mon.0) 110 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:17:18.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:18 vm03 bash[17524]: audit 2026-03-09T14:17:17.831556+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:20.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:20 vm03 bash[17524]: cluster 2026-03-09T14:17:18.908056+0000 mgr.x (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:20.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:20 vm03 bash[17524]: audit 2026-03-09T14:17:19.143733+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:20.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:20 vm03 bash[17524]: audit 2026-03-09T14:17:19.757481+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:20.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:20 vm03 bash[17524]: audit 2026-03-09T14:17:19.757481+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:22.198 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config 2026-03-09T14:17:22.457 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:17:22.457 INFO:teuthology.orchestra.run.vm03.stdout:[{"addr": "192.168.123.103", "hostname": "vm03", "labels": [], "status": ""}, {"addr": "192.168.123.104", "hostname": "vm04", "labels": [], "status": ""}] 2026-03-09T14:17:22.469 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:22 vm03 bash[17524]: cluster 2026-03-09T14:17:20.908223+0000 mgr.x (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:17:22.469 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:22 vm03 bash[17524]: cluster 2026-03-09T14:17:20.908223+0000 mgr.x (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:17:22.513 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm05 2026-03-09T14:17:22.513 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-09T14:17:22.513 DEBUG:teuthology.orchestra.run.vm05:> dd of=/etc/ceph/ceph.conf 2026-03-09T14:17:22.516 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-09T14:17:22.516 DEBUG:teuthology.orchestra.run.vm05:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:17:22.562 INFO:tasks.cephadm:Adding host vm05 to orchestrator... 2026-03-09T14:17:22.562 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph orch host add vm05 2026-03-09T14:17:23.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:23 vm03 bash[17524]: audit 2026-03-09T14:17:22.453389+0000 mgr.x (mgr.14150) 20 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:17:23.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:23 vm03 bash[17524]: audit 2026-03-09T14:17:22.453389+0000 mgr.x (mgr.14150) 20 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:17:23.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:23 vm03 bash[17524]: audit 2026-03-09T14:17:22.623678+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:23.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:23 vm03 bash[17524]: audit 2026-03-09T14:17:22.623678+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:23.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:23 vm03 bash[17524]: audit 2026-03-09T14:17:22.625751+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:23.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:23 vm03 bash[17524]: audit 2026-03-09T14:17:22.625751+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:23.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:23 vm03 bash[17524]: audit 
2026-03-09T14:17:22.628240+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:23.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:23 vm03 bash[17524]: audit 2026-03-09T14:17:22.628240+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:23.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:23 vm03 bash[17524]: audit 2026-03-09T14:17:22.630020+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:23.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:23 vm03 bash[17524]: audit 2026-03-09T14:17:22.630020+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:23.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:23 vm03 bash[17524]: audit 2026-03-09T14:17:22.630573+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:17:23.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:23 vm03 bash[17524]: audit 2026-03-09T14:17:22.630573+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:17:23.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:23 vm03 bash[17524]: audit 2026-03-09T14:17:22.631224+0000 mon.a (mon.0) 119 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:17:23.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:23 vm03 bash[17524]: audit 2026-03-09T14:17:22.631224+0000 mon.a (mon.0) 119 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:17:23.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:23 vm03 bash[17524]: audit 2026-03-09T14:17:22.631667+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:17:23.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:23 vm03 bash[17524]: audit 2026-03-09T14:17:22.631667+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:17:23.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:23 vm03 bash[17524]: audit 2026-03-09T14:17:22.767717+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:23.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:23 vm03 bash[17524]: audit 2026-03-09T14:17:22.767717+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:23.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:23 vm03 bash[17524]: audit 2026-03-09T14:17:22.770949+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:23.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:23 vm03 bash[17524]: audit 2026-03-09T14:17:22.770949+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:23.557 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:23 vm03 bash[17524]: audit 2026-03-09T14:17:22.773819+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:23.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:23 vm03 bash[17524]: audit 2026-03-09T14:17:22.773819+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:24.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:24 vm03 bash[17524]: cephadm 2026-03-09T14:17:22.632237+0000 mgr.x (mgr.14150) 21 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-09T14:17:24.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:24 vm03 bash[17524]: cephadm 2026-03-09T14:17:22.632237+0000 mgr.x (mgr.14150) 21 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-09T14:17:24.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:24 vm03 bash[17524]: cephadm 2026-03-09T14:17:22.666218+0000 mgr.x (mgr.14150) 22 : cephadm [INF] Updating vm04:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/config/ceph.conf 2026-03-09T14:17:24.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:24 vm03 bash[17524]: cephadm 2026-03-09T14:17:22.666218+0000 mgr.x (mgr.14150) 22 : cephadm [INF] Updating vm04:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/config/ceph.conf 2026-03-09T14:17:24.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:24 vm03 bash[17524]: cephadm 2026-03-09T14:17:22.694870+0000 mgr.x (mgr.14150) 23 : cephadm [INF] Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:17:24.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:24 vm03 bash[17524]: cephadm 2026-03-09T14:17:22.694870+0000 mgr.x (mgr.14150) 23 : cephadm [INF] Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:17:24.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:24 vm03 bash[17524]: cephadm 2026-03-09T14:17:22.729121+0000 mgr.x (mgr.14150) 24 : cephadm [INF] Updating vm04:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/config/ceph.client.admin.keyring 2026-03-09T14:17:24.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:24 vm03 bash[17524]: cephadm 2026-03-09T14:17:22.729121+0000 mgr.x (mgr.14150) 24 : cephadm [INF] Updating vm04:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/config/ceph.client.admin.keyring 2026-03-09T14:17:24.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:24 vm03 bash[17524]: cluster 2026-03-09T14:17:22.908490+0000 mgr.x (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:17:24.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:24 vm03 bash[17524]: cluster 2026-03-09T14:17:22.908490+0000 mgr.x (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:17:26.207 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config 2026-03-09T14:17:26.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:26 vm03 bash[17524]: cluster 2026-03-09T14:17:24.908701+0000 mgr.x (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:17:26.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:26 vm03 bash[17524]: cluster 2026-03-09T14:17:24.908701+0000 mgr.x (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:17:27.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:27 vm03 bash[17524]: audit 2026-03-09T14:17:26.462414+0000 
mgr.x (mgr.14150) 27 : audit [DBG] from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm05", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:17:27.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:27 vm03 bash[17524]: audit 2026-03-09T14:17:26.462414+0000 mgr.x (mgr.14150) 27 : audit [DBG] from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm05", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:17:28.211 INFO:teuthology.orchestra.run.vm03.stdout:Added host 'vm05' with addr '192.168.123.105' 2026-03-09T14:17:28.260 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph orch host ls --format=json 2026-03-09T14:17:28.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:28 vm03 bash[17524]: cluster 2026-03-09T14:17:26.908859+0000 mgr.x (mgr.14150) 28 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:17:28.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:28 vm03 bash[17524]: cluster 2026-03-09T14:17:26.908859+0000 mgr.x (mgr.14150) 28 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:17:28.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:28 vm03 bash[17524]: cephadm 2026-03-09T14:17:26.974228+0000 mgr.x (mgr.14150) 29 : cephadm [INF] Deploying cephadm binary to vm05 2026-03-09T14:17:28.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:28 vm03 bash[17524]: cephadm 2026-03-09T14:17:26.974228+0000 mgr.x (mgr.14150) 29 : cephadm [INF] Deploying cephadm binary to vm05 2026-03-09T14:17:29.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:29 vm03 bash[17524]: audit 2026-03-09T14:17:28.205516+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:29.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:29 vm03 bash[17524]: audit 2026-03-09T14:17:28.205516+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:29.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:29 vm03 bash[17524]: cephadm 2026-03-09T14:17:28.205811+0000 mgr.x (mgr.14150) 30 : cephadm [INF] Added host vm05 2026-03-09T14:17:29.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:29 vm03 bash[17524]: cephadm 2026-03-09T14:17:28.205811+0000 mgr.x (mgr.14150) 30 : cephadm [INF] Added host vm05 2026-03-09T14:17:29.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:29 vm03 bash[17524]: audit 2026-03-09T14:17:28.205991+0000 mon.a (mon.0) 125 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:17:29.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:29 vm03 bash[17524]: audit 2026-03-09T14:17:28.205991+0000 mon.a (mon.0) 125 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:17:29.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:29 vm03 bash[17524]: audit 2026-03-09T14:17:28.515106+0000 mon.a (mon.0) 126 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:29.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:29 vm03 bash[17524]: audit 2026-03-09T14:17:28.515106+0000 
mon.a (mon.0) 126 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:30.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:30 vm03 bash[17524]: cluster 2026-03-09T14:17:28.909016+0000 mgr.x (mgr.14150) 31 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:17:30.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:30 vm03 bash[17524]: cluster 2026-03-09T14:17:28.909016+0000 mgr.x (mgr.14150) 31 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:17:30.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:30 vm03 bash[17524]: audit 2026-03-09T14:17:29.789822+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:30.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:30 vm03 bash[17524]: audit 2026-03-09T14:17:29.789822+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:31.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:31 vm03 bash[17524]: audit 2026-03-09T14:17:30.391186+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:31.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:31 vm03 bash[17524]: audit 2026-03-09T14:17:30.391186+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:32.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:32 vm03 bash[17524]: cluster 2026-03-09T14:17:30.909170+0000 mgr.x (mgr.14150) 32 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:17:32.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:32 vm03 bash[17524]: cluster 2026-03-09T14:17:30.909170+0000 mgr.x (mgr.14150) 32 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:17:32.871 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config 2026-03-09T14:17:33.113 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:17:33.113 INFO:teuthology.orchestra.run.vm03.stdout:[{"addr": "192.168.123.103", "hostname": "vm03", "labels": [], "status": ""}, {"addr": "192.168.123.104", "hostname": "vm04", "labels": [], "status": ""}, {"addr": "192.168.123.105", "hostname": "vm05", "labels": [], "status": ""}] 2026-03-09T14:17:33.165 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-09T14:17:33.165 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph osd crush tunables default 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: cluster 2026-03-09T14:17:32.909332+0000 mgr.x (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: cluster 2026-03-09T14:17:32.909332+0000 mgr.x (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: audit 2026-03-09T14:17:33.109243+0000 mgr.x (mgr.14150) 34 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": 
["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: audit 2026-03-09T14:17:33.109243+0000 mgr.x (mgr.14150) 34 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: audit 2026-03-09T14:17:33.178191+0000 mon.a (mon.0) 129 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: audit 2026-03-09T14:17:33.178191+0000 mon.a (mon.0) 129 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: audit 2026-03-09T14:17:33.180616+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: audit 2026-03-09T14:17:33.180616+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: audit 2026-03-09T14:17:33.184035+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: audit 2026-03-09T14:17:33.184035+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: audit 2026-03-09T14:17:33.186492+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: audit 2026-03-09T14:17:33.186492+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: audit 2026-03-09T14:17:33.187067+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: audit 2026-03-09T14:17:33.187067+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: audit 2026-03-09T14:17:33.187702+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: audit 2026-03-09T14:17:33.187702+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: audit 2026-03-09T14:17:33.188091+0000 mon.a (mon.0) 135 : audit [INF] 
from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: audit 2026-03-09T14:17:33.188091+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: cephadm 2026-03-09T14:17:33.188632+0000 mgr.x (mgr.14150) 35 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: cephadm 2026-03-09T14:17:33.188632+0000 mgr.x (mgr.14150) 35 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: cephadm 2026-03-09T14:17:33.218809+0000 mgr.x (mgr.14150) 36 : cephadm [INF] Updating vm05:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/config/ceph.conf 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: cephadm 2026-03-09T14:17:33.218809+0000 mgr.x (mgr.14150) 36 : cephadm [INF] Updating vm05:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/config/ceph.conf 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: cephadm 2026-03-09T14:17:33.246676+0000 mgr.x (mgr.14150) 37 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: cephadm 2026-03-09T14:17:33.246676+0000 mgr.x (mgr.14150) 37 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: cephadm 2026-03-09T14:17:33.280500+0000 mgr.x (mgr.14150) 38 : cephadm [INF] Updating vm05:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/config/ceph.client.admin.keyring 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: cephadm 2026-03-09T14:17:33.280500+0000 mgr.x (mgr.14150) 38 : cephadm [INF] Updating vm05:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/config/ceph.client.admin.keyring 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: audit 2026-03-09T14:17:33.312508+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: audit 2026-03-09T14:17:33.312508+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: audit 2026-03-09T14:17:33.314376+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: audit 2026-03-09T14:17:33.314376+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:34 vm03 bash[17524]: audit 2026-03-09T14:17:33.316275+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:34.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 
2026-03-09T14:17:36.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:36 vm03 bash[17524]: cluster 2026-03-09T14:17:34.909481+0000 mgr.x (mgr.14150) 39 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:36.880 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config
2026-03-09T14:17:37.189 INFO:teuthology.orchestra.run.vm03.stderr:adjusted tunables profile to default
2026-03-09T14:17:37.251 INFO:tasks.cephadm:Adding mon.a on vm03
2026-03-09T14:17:37.251 INFO:tasks.cephadm:Adding mon.b on vm04
2026-03-09T14:17:37.251 INFO:tasks.cephadm:Adding mon.c on vm05
2026-03-09T14:17:37.251 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph orch apply mon '3;vm03:192.168.123.103=a;vm04:192.168.123.104=b;vm05:192.168.123.105=c'
2026-03-09T14:17:37.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:37 vm03 bash[17524]: audit 2026-03-09T14:17:37.130449+0000 mon.a (mon.0) 139 : audit [INF] from='client.? 192.168.123.103:0/807970424' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch
2026-03-09T14:17:38.365 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/config/ceph.conf
2026-03-09T14:17:38.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:38 vm03 bash[17524]: cluster 2026-03-09T14:17:36.909657+0000 mgr.x (mgr.14150) 40 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:38.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:38 vm03 bash[17524]: audit 2026-03-09T14:17:37.186313+0000 mon.a (mon.0) 140 : audit [INF] from='client.? 192.168.123.103:0/807970424' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished
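The quoted placement string '3;vm03:192.168.123.103=a;vm04:192.168.123.104=b;vm05:192.168.123.105=c' pins three named mons to specific hosts and IPs. A hedged sketch of an equivalent spec-file form, assuming the YAML hosts list accepts the same host:addr=name entries as the CLI string and that 'ceph orch apply -i -' reads the spec from stdin:

    # Hypothetical spec-file equivalent of the inline placement string above.
    ceph orch apply -i - <<'EOF'
    service_type: mon
    placement:
      count: 3
      hosts:
        - vm03:192.168.123.103=a
        - vm04:192.168.123.104=b
        - vm05:192.168.123.105=c
    EOF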
2026-03-09T14:17:38.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:38 vm03 bash[17524]: cluster 2026-03-09T14:17:37.187398+0000 mon.a (mon.0) 141 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-09T14:17:38.640 INFO:teuthology.orchestra.run.vm05.stdout:Scheduled mon update...
2026-03-09T14:17:38.751 DEBUG:teuthology.orchestra.run.vm04:mon.b> sudo journalctl -f -n 0 -u ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@mon.b.service
2026-03-09T14:17:38.753 DEBUG:teuthology.orchestra.run.vm05:mon.c> sudo journalctl -f -n 0 -u ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@mon.c.service
2026-03-09T14:17:38.753 INFO:tasks.cephadm:Waiting for 3 mons in monmap...
2026-03-09T14:17:38.754 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph mon dump -f json
2026-03-09T14:17:39.912 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.c/config
2026-03-09T14:17:40.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:39 vm03 bash[17524]: audit 2026-03-09T14:17:38.632013+0000 mgr.x (mgr.14150) 41 : audit [DBG] from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm03:192.168.123.103=a;vm04:192.168.123.104=b;vm05:192.168.123.105=c", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:17:40.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:39 vm03 bash[17524]: cephadm 2026-03-09T14:17:38.633069+0000 mgr.x (mgr.14150) 42 : cephadm [INF] Saving service mon spec with placement vm03:192.168.123.103=a;vm04:192.168.123.104=b;vm05:192.168.123.105=c;count:3
2026-03-09T14:17:40.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:39 vm03 bash[17524]: audit 2026-03-09T14:17:38.635887+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:39 vm03 bash[17524]: audit 2026-03-09T14:17:38.636581+0000 mon.a (mon.0) 143 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
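tasks.cephadm now polls 'ceph mon dump -f json' (via cephadm shell, as above) until the monmap reports all three mons. A minimal shell sketch of that wait loop, assuming jq is available to count the .mons array (the task itself parses the JSON in Python):

    # Poll the monmap until three monitors have joined.
    until [ "$(ceph mon dump -f json 2>/dev/null | jq '.mons | length')" -eq 3 ]; do
        sleep 5
    done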
2026-03-09T14:17:40.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:39 vm03 bash[17524]: audit 2026-03-09T14:17:38.637412+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:17:40.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:39 vm03 bash[17524]: audit 2026-03-09T14:17:38.637779+0000 mon.a (mon.0) 145 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:17:40.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:39 vm03 bash[17524]: audit 2026-03-09T14:17:38.640204+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:39 vm03 bash[17524]: audit 2026-03-09T14:17:38.641303+0000 mon.a (mon.0) 147 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T14:17:40.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:39 vm03 bash[17524]: audit 2026-03-09T14:17:38.641647+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:17:40.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:39 vm03 bash[17524]: cephadm 2026-03-09T14:17:38.642107+0000 mgr.x (mgr.14150) 43 : cephadm [INF] Deploying daemon mon.c on vm05
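Once cephadm reports 'Deploying daemon mon.c on vm05', the daemon runs under a systemd unit named ceph-<fsid>@<daemon>.service, which is exactly what the journalctl followers above attach to. To tail the new mon by hand on vm05 (fsid taken from this run):

    # Follow the newly deployed mon.c unit.
    sudo journalctl -f -n 0 -u ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@mon.c.service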
2026-03-09T14:17:40.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:39 vm03 bash[17524]: cluster 2026-03-09T14:17:38.909816+0000 mgr.x (mgr.14150) 44 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:40.209 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 systemd[1]: Started Ceph mon.c for 3346de4a-1bc2-11f1-95ae-3796c8433614.
2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 0 set uid:gid to 167:167 (ceph:ceph)
2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 8
2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 0 pidfile_write: ignore empty --pid-file
2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 0 load: jerasure load: lrc
2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: RocksDB version: 7.9.2
2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Git sha 0
2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Compile date 2026-02-25 18:11:04
2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: DB SUMMARY
2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: DB Session ID: EU4KIYVNTQQC925UP0O6
2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: CURRENT file: CURRENT
2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: IDENTITY file: IDENTITY
2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes
2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-c/store.db dir, Total Num: 0, files:
2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-c/store.db: 000004.log size: 511 ;
2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.error_if_exists: 0
2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 
2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.create_if_missing: 0 2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.paranoid_checks: 1 2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.env: 0x558a70e7cdc0 2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.info_log: 0x558aafc18700 2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.statistics: (nil) 2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.use_fsync: 0 2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.max_log_file_size: 0 2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.allow_fallocate: 1 2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 
bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.use_direct_reads: 0 2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.db_log_dir: 2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.wal_dir: 2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T14:17:40.515 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.write_buffer_manager: 0x558aafc1d900 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: 
Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.unordered_write: 0 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.row_cache: None 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.wal_filter: None 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.two_write_queues: 0 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.wal_compression: 0 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.atomic_flush: 0 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 
2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.log_readahead_size: 0 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.max_background_jobs: 2 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.max_background_compactions: -1 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.max_subcompactions: 1 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: 
Options.stats_persist_period_sec: 600 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.max_open_files: -1 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.max_background_flushes: -1 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Compression algorithms supported: 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: kZSTD supported: 0 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: kXpressCompression supported: 0 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: kBZip2Compression supported: 0 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: kLZ4Compression supported: 1 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: kZlibCompression supported: 1 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: kSnappyCompression supported: 1 2026-03-09T14:17:40.516 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 
2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-c/store.db/MANIFEST-000005 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.merge_operator: 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.compaction_filter: None 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558aafc18640) 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cache_index_and_filter_blocks: 1 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: pin_top_level_index_and_filter: 1 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: index_type: 0 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: data_block_index_type: 0 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: index_shortening: 1 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: data_block_hash_table_util_ratio: 0.750000 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: checksum: 4 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: no_block_cache: 0 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: block_cache: 0x558aafc3f350 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: block_cache_name: BinnedLRUCache 2026-03-09T14:17:40.517 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: block_cache_options: 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: capacity : 536870912 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: num_shard_bits : 4 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: strict_capacity_limit : 0 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: high_pri_pool_ratio: 0.000 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: block_cache_compressed: (nil) 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: persistent_cache: (nil) 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: block_size: 4096 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: block_size_deviation: 10 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: block_restart_interval: 16 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: index_block_restart_interval: 1 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: metadata_block_size: 4096 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: partition_filters: 0 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: use_delta_encoding: 1 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: filter_policy: bloomfilter 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: whole_key_filtering: 1 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: verify_compression: 0 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: read_amp_bytes_per_bit: 0 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: format_version: 5 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: enable_index_compression: 1 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: block_align: 0 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: max_auto_readahead_size: 262144 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: prepopulate_block_cache: 0 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: initial_auto_readahead_size: 8192 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: num_file_reads_for_auto_readahead: 2 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 
7f32454cdd80 4 rocksdb: Options.compression: NoCompression 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.num_levels: 7 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T14:17:40.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.compression_opts.window_bits: -14 
2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 
rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 
2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.inplace_update_support: 0 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.bloom_locality: 0 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.max_successive_merges: 0 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-09T14:17:40.518 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.ttl: 2592000 2026-03-09T14:17:40.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.enable_blob_files: false 2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.min_blob_size: 0 2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.238+0000 7f32454cdd80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.242+0000 7f32454cdd80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-c/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.242+0000 7f32454cdd80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.242+0000 7f32454cdd80 4 rocksdb: 
[db/db_impl/db_impl_open.cc:539] DB ID: d69c5895-21df-40ce-84c6-c1bac82cf306
2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.250+0000 7f32454cdd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773065860256404, "job": 1, "event": "recovery_started", "wal_files": [4]}
2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.250+0000 7f32454cdd80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.250+0000 7f32454cdd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773065860257543, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773065860, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d69c5895-21df-40ce-84c6-c1bac82cf306", "db_session_id": "EU4KIYVNTQQC925UP0O6", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.250+0000 7f32454cdd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773065860257593, "job": 1, "event": "recovery_finished"}
2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.250+0000 7f32454cdd80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 10
2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.254+0000 7f32454cdd80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-c/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.254+0000 7f32454cdd80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x558aafc40e00
2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.254+0000 7f32454cdd80 4 rocksdb: DB pointer 0x558aafd56000
2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.254+0000 7f32454cdd80 0 mon.c does not exist in monmap, will attempt to join an existing cluster
2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.254+0000 7f32454cdd80 0 using public_addr v2:192.168.123.105:0/0 -> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0]
2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.254+0000 7f323b297640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.254+0000 7f323b297640 4 rocksdb: [db/db_impl/db_impl.cc:1111]
2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: ** DB Stats **
2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: Uptime(secs): 0.0 total, 0.0 interval
2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-09T14:17:40.519 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: ** Compaction Stats [default] **
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: L0 1/0 1.60 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.4 0.00 0.00 1 0.001 0 0 0.0 0.0
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: Sum 1/0 1.60 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.4 0.00 0.00 1 0.001 0 0 0.0 0.0
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.4 0.00 0.00 1 0.001 0 0 0.0 0.0
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: ** Compaction Stats [default] **
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.4 0.00 0.00 1 0.001 0 0 0.0 0.0
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: Uptime(secs): 0.0 total, 0.0 interval
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: Flush(GB): cumulative 0.000, interval 0.000
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: AddFile(GB): cumulative 0.000, interval 0.000
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: AddFile(Total Files): cumulative 0, interval 0
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: AddFile(L0 Files): cumulative 0, interval 0
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: AddFile(Keys): cumulative 0, interval 0
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: Cumulative compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: Interval compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: Block cache BinnedLRUCache@0x558aafc3f350#8 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2e-05 secs_since: 0
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: Block cache entry stats(count,size,portion): DataBlock(1,0.64 KB,0.00012219%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: ** File Read Latency Histogram By Level [default] **
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.258+0000 7f32454cdd80 0 starting mon.c rank -1 at public addrs [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] at bind addrs [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon_data /var/lib/ceph/mon/ceph-c fsid 3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.258+0000 7f32454cdd80 1 mon.c@-1(???) e0 preinit fsid 3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.274+0000 7f323e29d640 0 mon.c@-1(synchronizing).mds e1 new map
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.274+0000 7f323e29d640 0 mon.c@-1(synchronizing).mds e1 print_map
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: e1
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: btime 2026-03-09T14:16:38.208797+0000
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: enable_multiple, ever_enabled_multiple: 1,1
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: legacy client fscid: -1
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]:
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: No filesystems configured
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.274+0000 7f323e29d640 1 mon.c@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.274+0000 7f323e29d640 1 mon.c@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.274+0000 7f323e29d640 1 mon.c@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.278+0000 7f323e29d640 1 mon.c@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.278+0000 7f323e29d640 1 mon.c@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.278+0000 7f323e29d640 1 mon.c@-1(synchronizing).osd e4 e4: 0 total, 0 up, 0 in
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.278+0000 7f323e29d640 0 mon.c@-1(synchronizing).osd e4 crush map has features 3314932999778484224, adjusting msgr requires
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.278+0000 7f323e29d640 0 mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.278+0000 7f323e29d640 0 mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.278+0000 7f323e29d640 0 mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:38.209592+0000 mon.a (mon.0) 0 : cluster [INF] mkfs 3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:17:40.520 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:38.202207+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:39.224957+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:39.225007+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:39.225013+0000 mon.a (mon.0) 3 : cluster [DBG] fsid 3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:39.225016+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-09T14:16:36.936076+0000
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:39.225027+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-09T14:16:36.936076+0000
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:39.225030+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid)
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:39.225036+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:39.225039+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:39.225308+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:39.225325+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:39.225837+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:39.289533+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.103:0/1977463852' entity='client.admin'
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:39.937384+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.103:0/81453206' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:42.186417+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.103:0/2167707565' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:43.296492+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon x
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:43.300744+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: x(active, starting, since 0.00437062s)
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:43.303504+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14101 192.168.123.103:0/2835702550' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:43.303608+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14101 192.168.123.103:0/2835702550' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:43.303698+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14101 192.168.123.103:0/2835702550' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:43.304110+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14101 192.168.123.103:0/2835702550' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:43.304728+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14101 192.168.123.103:0/2835702550' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:43.309856+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon x is now available
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:43.319910+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14101 192.168.123.103:0/2835702550' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:43.321581+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14101 192.168.123.103:0/2835702550' entity='mgr.x'
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:43.323814+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14101 192.168.123.103:0/2835702550' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:43.325302+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14101 192.168.123.103:0/2835702550' entity='mgr.x'
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:43.327577+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14101 192.168.123.103:0/2835702550' entity='mgr.x'
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:44.308704+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: x(active, since 1.01233s)
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:44.534433+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 192.168.123.103:0/1837038701' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:44.789985+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 192.168.123.103:0/3267228195' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
2026-03-09T14:17:40.521 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:45.092672+0000 mon.a (mon.0) 31 : audit [INF] from='client.? 192.168.123.103:0/3331532943' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
2026-03-09T14:17:40.522 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:45.310594+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.103:0/3331532943' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
2026-03-09T14:17:40.522 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:45.313754+0000 mon.a (mon.0) 33 : cluster [DBG] mgrmap e4: x(active, since 2s)
2026-03-09T14:17:40.522 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:45.938537+0000 mon.a (mon.0) 34 : audit [DBG] from='client.? 192.168.123.103:0/1601037514' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-09T14:17:40.522 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:48.640051+0000 mon.a (mon.0) 35 : cluster [INF] Active manager daemon x restarted
2026-03-09T14:17:40.522 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:48.640315+0000 mon.a (mon.0) 36 : cluster [INF] Activating manager daemon x
2026-03-09T14:17:40.522 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:48.645216+0000 mon.a (mon.0) 37 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in
2026-03-09T14:17:40.522 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:48.645391+0000 mon.a (mon.0) 38 : cluster [DBG] mgrmap e5: x(active, starting, since 0.00518238s)
2026-03-09T14:17:40.522 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:48.647831+0000 mon.a (mon.0) 39 : audit [DBG] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T14:17:40.522 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:48.648704+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-09T14:17:40.522 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:48.649122+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-09T14:17:40.522 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:48.649277+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-09T14:17:40.522 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:48.649394+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-09T14:17:40.522 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:48.655143+0000 mon.a (mon.0) 44 : cluster [INF] Manager daemon x is now available
2026-03-09T14:17:40.522 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:48.663186+0000 mon.a (mon.0) 45 : audit [INF] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x'
2026-03-09T14:17:40.522 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:48.667384+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x'
2026-03-09T14:17:40.522 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:48.682063+0000 mon.a (mon.0) 47 : audit [DBG] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:17:40.522 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:48.683570+0000 mon.a (mon.0) 48 : audit [DBG] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:17:40.522 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:48.685800+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2026-03-09T14:17:40.523 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:16:48.660325+0000 mgr.x (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration.
2026-03-09T14:17:40.523 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:48.693408+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch
2026-03-09T14:17:40.523 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:49.193147+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x'
2026-03-09T14:17:40.523 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:49.259784+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x'
2026-03-09T14:17:40.523 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:49.651824+0000 mon.a (mon.0) 53 : cluster [DBG] mgrmap e6: x(active, since 1.01162s)
2026-03-09T14:17:40.523 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:49.652673+0000 mgr.x (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-09T14:17:40.523 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:49.657130+0000 mgr.x (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-09T14:17:40.523 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:50.032145+0000 mgr.x (mgr.14118) 4 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:17:40.523 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:50.035978+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x'
2026-03-09T14:17:40.523 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:50.042394+0000 mon.a (mon.0) 55 : audit [DBG] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:17:40.523 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:16:50.060122+0000 mgr.x (mgr.14118) 5 : cephadm [INF] [09/Mar/2026:14:16:50] ENGINE Bus STARTING
2026-03-09T14:17:40.523 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:16:50.171951+0000 mgr.x (mgr.14118) 6 : cephadm [INF] [09/Mar/2026:14:16:50] ENGINE Serving on https://192.168.123.103:7150
2026-03-09T14:17:40.523 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:16:50.172410+0000 mgr.x (mgr.14118) 7 : cephadm [INF] [09/Mar/2026:14:16:50] ENGINE Client ('192.168.123.103', 58170) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-09T14:17:40.523 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:16:50.272937+0000 mgr.x (mgr.14118) 8 : cephadm [INF] [09/Mar/2026:14:16:50] ENGINE Serving on http://192.168.123.103:8765
2026-03-09T14:17:40.523 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:16:50.273227+0000 mgr.x (mgr.14118) 9 : cephadm [INF] [09/Mar/2026:14:16:50] ENGINE Bus STARTED
2026-03-09T14:17:40.523 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:50.273709+0000 mon.a (mon.0) 56 : audit [DBG] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:17:40.523 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:50.722898+0000 mon.a (mon.0) 57 : audit [INF] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x'
2026-03-09T14:17:40.523 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:50.724822+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x'
2026-03-09T14:17:40.523 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:50.413675+0000 mgr.x (mgr.14118) 10 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:17:40.523 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:50.699736+0000 mgr.x (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:17:40.523 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:16:50.699976+0000 mgr.x (mgr.14118) 12 : cephadm [INF] Generating ssh key...
2026-03-09T14:17:40.523 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:51.000306+0000 mgr.x (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:17:40.523 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:51.053079+0000 mon.a (mon.0) 59 : cluster [DBG] mgrmap e7: x(active, since 2s)
2026-03-09T14:17:40.523 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:51.279729+0000 mgr.x (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm03", "addr": "192.168.123.103", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:16:51.850777+0000 mgr.x (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm03
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:53.208511+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x'
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:16:53.208964+0000 mgr.x (mgr.14118) 16 : cephadm [INF] Added host vm03
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:53.212192+0000 mon.a (mon.0) 61 : audit [DBG] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:53.574770+0000 mon.a (mon.0) 62 : audit [INF] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x'
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:53.861980+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x'
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:54.136352+0000 mon.a (mon.0) 64 : audit [INF] from='client.? 192.168.123.103:0/2564945460' entity='client.admin'
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:53.570818+0000 mgr.x (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:16:53.571754+0000 mgr.x (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:53.858061+0000 mgr.x (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:16:53.858814+0000 mgr.x (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:54.408466+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 192.168.123.103:0/4028812733' entity='client.admin'
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:54.748323+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.103:0/754411936' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:55.050189+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x'
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:55.385143+0000 mon.a (mon.0) 68 : audit [INF] from='mgr.14118 192.168.123.103:0/615101237' entity='mgr.x'
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:55.409689+0000 mon.a (mon.0) 69 : audit [INF] from='client.? 192.168.123.103:0/754411936' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:55.412099+0000 mon.a (mon.0) 70 : cluster [DBG] mgrmap e8: x(active, since 6s)
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:55.910446+0000 mon.a (mon.0) 71 : audit [DBG] from='client.? 192.168.123.103:0/2250228207' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:58.892737+0000 mon.a (mon.0) 72 : cluster [INF] Active manager daemon x restarted
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:58.893165+0000 mon.a (mon.0) 73 : cluster [INF] Activating manager daemon x
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:58.898531+0000 mon.a (mon.0) 74 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:58.898752+0000 mon.a (mon.0) 75 : cluster [DBG] mgrmap e9: x(active, starting, since 0.00568639s)
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:58.903902+0000 mon.a (mon.0) 76 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:58.904900+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:58.905958+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:58.906377+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:58.906791+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:58.913076+0000 mon.a (mon.0) 81 : cluster [INF] Manager daemon x is now available
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:58.936530+0000 mon.a (mon.0) 82 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:58.965999+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2026-03-09T14:17:40.524 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:58.975877+0000 mon.a (mon.0) 84 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:16:59.902012+0000 mon.a (mon.0) 85 : cluster [DBG] mgrmap e10: x(active, since 1.00895s)
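Entries 66-81 explain the mgr id change: enabling a module that is not in the always-on set respawns the active mgr (mgr.14118 becomes mgr.14150), which then re-pulls mon/mgr/mds/osd metadata and the config dump before it is "now available". A test that enables dashboard has to wait for the mgr to come back; a sketch, with the exact `ceph mgr stat` output shape hedged:

    ceph mgr module enable dashboard
    # Poll until an active mgr is available again; the JSON field name is
    # assumed to match current `ceph mgr stat` output.
    until ceph mgr stat -f json | grep -q '"available": *true'; do sleep 1; done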
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:16:59.623059+0000 mgr.x (mgr.14150) 1 : cephadm [INF] [09/Mar/2026:14:16:59] ENGINE Bus STARTING
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:16:59.724715+0000 mgr.x (mgr.14150) 2 : cephadm [INF] [09/Mar/2026:14:16:59] ENGINE Serving on http://192.168.123.103:8765
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:16:59.834905+0000 mgr.x (mgr.14150) 3 : cephadm [INF] [09/Mar/2026:14:16:59] ENGINE Serving on https://192.168.123.103:7150
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:16:59.834994+0000 mgr.x (mgr.14150) 4 : cephadm [INF] [09/Mar/2026:14:16:59] ENGINE Bus STARTED
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:16:59.835554+0000 mgr.x (mgr.14150) 5 : cephadm [INF] [09/Mar/2026:14:16:59] ENGINE Client ('192.168.123.103', 53372) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:59.903221+0000 mgr.x (mgr.14150) 6 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:16:59.907650+0000 mgr.x (mgr.14150) 7 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
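The ENGINE lines are CherryPy servers started inside the restarted mgr by the cephadm module (an HTTP endpoint on 8765 and an HTTPS one on 7150 in this run). The [INF] "Client ... lost" entry only records a peer that connected and dropped before finishing the TLS handshake, which is what a probe that merely checks the listener does; it is not an error by itself. A sketch of such probes against the endpoint from the log (tool availability is an assumption of this example):

    # Connect-and-close; typically enough to produce the same
    # "lost ... during handshake" entry on the server side.
    nc -z 192.168.123.103 7150
    # Complete a handshake instead, to inspect the served certificate:
    openssl s_client -connect 192.168.123.103:7150 </dev/null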
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:00.188002+0000 mgr.x (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:00.227959+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:00.230419+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:00.702287+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:00.967806+0000 mon.a (mon.0) 89 : audit [DBG] from='client.? 192.168.123.103:0/693571450' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:00.549855+0000 mgr.x (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:01.316344+0000 mon.a (mon.0) 90 : audit [INF] from='client.? 192.168.123.103:0/381537853' entity='client.admin'
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:17:01.706636+0000 mon.a (mon.0) 91 : cluster [DBG] mgrmap e11: x(active, since 2s)
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:03.450854+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:04.093514+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:17:05.457328+0000 mon.a (mon.0) 94 : cluster [DBG] mgrmap e12: x(active, since 6s)
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:05.581214+0000 mon.a (mon.0) 95 : audit [INF] from='client.? 192.168.123.103:0/2441871205' entity='client.admin'
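mgr.x entries 8 and 9 are the dashboard bootstrap: generate a self-signed certificate, then create the initial admin account with the administrator role. In current Ceph the password is passed via a file rather than on the command line; a sketch (password and file path are illustrative):

    ceph dashboard create-self-signed-cert
    echo -n 'Sup3rSecret!' > /tmp/dash-pass
    # --force-password / pwd_update_required correspond to the flags
    # visible in the audited JSON above.
    ceph dashboard ac-user-create admin -i /tmp/dash-pass administrator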
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:09.883300+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:09.886383+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:09.887180+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:09.890723+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:09.898420+0000 mon.a (mon.0) 100 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:09.902060+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:10.644084+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:10.645054+0000 mon.a (mon.0) 103 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:10.646058+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:10.646470+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:10.795849+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.525 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:10.798725+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.526 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:10.801120+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.526 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:10.640941+0000 mgr.x (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch
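mgr.x entry 10 is cephadm's managed client-keyring feature: the admin keyring is declared once and cephadm then keeps /etc/ceph/ceph.client.admin.keyring in sync on every host matching the placement. CLI equivalent of the audited command:

    # '*' matches all managed hosts; mode 0755 mirrors the audit entry.
    ceph orch client-keyring set client.admin '*' --mode 0755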
2026-03-09T14:17:40.526 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:17:10.647048+0000 mgr.x (mgr.14150) 11 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf
2026-03-09T14:17:40.526 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:17:10.682973+0000 mgr.x (mgr.14150) 12 : cephadm [INF] Updating vm03:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/config/ceph.conf
2026-03-09T14:17:40.526 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:17:10.727278+0000 mgr.x (mgr.14150) 13 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring
2026-03-09T14:17:40.526 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:17:10.760250+0000 mgr.x (mgr.14150) 14 : cephadm [INF] Updating vm03:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/config/ceph.client.admin.keyring
2026-03-09T14:17:40.526 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:15.671986+0000 mgr.x (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm04", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:17:40.526 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:17:16.230711+0000 mgr.x (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm04
2026-03-09T14:17:40.526 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:17.525535+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.526 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:17:17.526385+0000 mgr.x (mgr.14150) 17 : cephadm [INF] Added host vm04
2026-03-09T14:17:40.526 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:17.526727+0000 mon.a (mon.0) 110 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:17:40.526 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:17.831556+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.526 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:17:18.908056+0000 mgr.x (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:40.526 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:19.143733+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.526 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:19.757481+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.526 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:17:20.908223+0000 mgr.x (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:40.526 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:22.453389+0000 mgr.x (mgr.14150) 20 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
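mgr.x entries 15-17 are host enrolment: `ceph orch host add vm04` makes the mgr SSH in, push a cephadm binary, and register the host, as the "Deploying cephadm binary" / "Added host" pair shows. The cluster's SSH key must already be authorized on the target; a sketch of the usual two steps (root login is an assumption of this example):

    ssh-copy-id -f -i /etc/ceph/ceph.pub root@vm04
    ceph orch host add vm04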
2026-03-09T14:17:40.526 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:22.623678+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.526 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:22.625751+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.526 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:22.628240+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.526 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:22.630020+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.526 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:22.630573+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:17:40.526 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:22.631224+0000 mon.a (mon.0) 119 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:17:40.526 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:22.631667+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:17:40.526 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:22.767717+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.526 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:22.770949+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.526 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:22.773819+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.526 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:17:22.632237+0000 mgr.x (mgr.14150) 21 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf
2026-03-09T14:17:40.526 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:17:22.666218+0000 mgr.x (mgr.14150) 22 : cephadm [INF] Updating vm04:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/config/ceph.conf
2026-03-09T14:17:40.526 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:17:22.694870+0000 mgr.x (mgr.14150) 23 : cephadm [INF] Updating vm04:/etc/ceph/ceph.client.admin.keyring
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:17:22.729121+0000 mgr.x (mgr.14150) 24 : cephadm [INF] Updating vm04:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/config/ceph.client.admin.keyring
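Right after a host is added, cephadm regenerates and pushes the client files, which is what the config generate-minimal-conf / auth get pair followed by the four "Updating vm04:..." entries shows. The conf it distributes can be previewed by hand:

    # Prints the minimal ceph.conf (fsid plus mon addresses) that cephadm
    # writes to /etc/ceph and /var/lib/ceph/<fsid>/config on each host.
    ceph config generate-minimal-conf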
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:17:22.908490+0000 mgr.x (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:17:24.908701+0000 mgr.x (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:26.462414+0000 mgr.x (mgr.14150) 27 : audit [DBG] from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm05", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:17:26.908859+0000 mgr.x (mgr.14150) 28 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:17:26.974228+0000 mgr.x (mgr.14150) 29 : cephadm [INF] Deploying cephadm binary to vm05
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:28.205516+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:17:28.205811+0000 mgr.x (mgr.14150) 30 : cephadm [INF] Added host vm05
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:28.205991+0000 mon.a (mon.0) 125 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:28.515106+0000 mon.a (mon.0) 126 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:17:28.909016+0000 mgr.x (mgr.14150) 31 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:29.789822+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:30.391186+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:17:30.909170+0000 mgr.x (mgr.14150) 32 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:17:32.909332+0000 mgr.x (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:33.109243+0000 mgr.x (mgr.14150) 34 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
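mgr.x entries 27-34 repeat the enrolment for vm05 and then list the hosts in JSON, which is how the teuthology task verifies all three targets are registered. A sketch of that check (jq availability is an assumption of this example):

    ceph orch host ls --format json | jq -r '.[].hostname'
    # expected here: vm03 vm04 vm05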
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:33.178191+0000 mon.a (mon.0) 129 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:33.180616+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:33.184035+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:33.186492+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:33.187067+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:33.187702+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:33.188091+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:17:33.188632+0000 mgr.x (mgr.14150) 35 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:17:33.218809+0000 mgr.x (mgr.14150) 36 : cephadm [INF] Updating vm05:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/config/ceph.conf
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:17:33.246676+0000 mgr.x (mgr.14150) 37 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:17:33.280500+0000 mgr.x (mgr.14150) 38 : cephadm [INF] Updating vm05:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/config/ceph.client.admin.keyring
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:33.312508+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:33.314376+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:33.316275+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
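The recurring `config rm` with who=osd/host:vmNN clears a host-masked option: `osd/host:vm05` scopes a setting to the OSDs of that host only, and cephadm removes any stale osd_memory_target override when it takes over a host. Host masks look like this (the value is illustrative):

    ceph config set osd/host:vm05 osd_memory_target 4294967296
    ceph config rm osd/host:vm05 osd_memory_target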
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:17:34.909481+0000 mgr.x (mgr.14150) 39 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:40.527 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:37.130449+0000 mon.a (mon.0) 139 : audit [INF] from='client.? 192.168.123.103:0/807970424' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch
2026-03-09T14:17:40.528 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:17:36.909657+0000 mgr.x (mgr.14150) 40 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:40.528 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:37.186313+0000 mon.a (mon.0) 140 : audit [INF] from='client.? 192.168.123.103:0/807970424' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished
2026-03-09T14:17:40.528 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:17:37.187398+0000 mon.a (mon.0) 141 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-09T14:17:40.528 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:38.632013+0000 mgr.x (mgr.14150) 41 : audit [DBG] from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm03:192.168.123.103=a;vm04:192.168.123.104=b;vm05:192.168.123.105=c", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:17:40.528 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:17:38.633069+0000 mgr.x (mgr.14150) 42 : cephadm [INF] Saving service mon spec with placement vm03:192.168.123.103=a;vm04:192.168.123.104=b;vm05:192.168.123.105=c;count:3
2026-03-09T14:17:40.528 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:38.635887+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:40.528 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:38.636581+0000 mon.a (mon.0) 143 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:17:40.528 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:38.637412+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
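mgr.x entry 41 replaces the earlier unmanaged mon spec with an explicit placement: three mons with pinned hosts, IPs, and daemon names. The audited placement string can be used directly on the CLI (quote it so the shell keeps the semicolons); an equivalent YAML service spec would also work:

    ceph orch apply mon '3;vm03:192.168.123.103=a;vm04:192.168.123.104=b;vm05:192.168.123.105=c'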
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:38.637779+0000 mon.a (mon.0) 145 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:17:40.528 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:38.637779+0000 mon.a (mon.0) 145 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:17:40.528 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:38.640204+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:40.528 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:38.640204+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:40.528 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:38.641303+0000 mon.a (mon.0) 147 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:17:40.528 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:38.641303+0000 mon.a (mon.0) 147 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:17:40.528 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:38.641647+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:17:40.528 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: audit 2026-03-09T14:17:38.641647+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:17:40.528 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:17:38.642107+0000 mgr.x (mgr.14150) 43 : cephadm [INF] Deploying daemon mon.c on vm05 2026-03-09T14:17:40.528 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cephadm 2026-03-09T14:17:38.642107+0000 mgr.x (mgr.14150) 43 : cephadm [INF] Deploying daemon mon.c on vm05 2026-03-09T14:17:40.528 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:17:38.909816+0000 mgr.x (mgr.14150) 44 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:17:40.528 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: cluster 2026-03-09T14:17:38.909816+0000 mgr.x (mgr.14150) 44 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:17:40.528 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:40 vm05 bash[20070]: debug 2026-03-09T14:17:40.366+0000 7f323e29d640 1 mon.c@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-09T14:17:42.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:41 vm04 bash[19581]: debug 2026-03-09T14:17:41.766+0000 7feeaaf83640 1 mon.b@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-09T14:17:45.397 INFO:teuthology.orchestra.run.vm05.stdout: 
2026-03-09T14:17:45.397 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":2,"fsid":"3346de4a-1bc2-11f1-95ae-3796c8433614","modified":"2026-03-09T14:17:40.379561Z","created":"2026-03-09T14:16:36.936076Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:3300","nonce":0},{"type":"v1","addr":"192.168.123.103:6789","nonce":0}]},"addr":"192.168.123.103:6789/0","public_addr":"192.168.123.103:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]}
2026-03-09T14:17:45.397 INFO:teuthology.orchestra.run.vm05.stderr:dumped monmap epoch 2
2026-03-09T14:17:45.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: cephadm 2026-03-09T14:17:40.151139+0000 mgr.x (mgr.14150) 45 : cephadm [INF] Deploying daemon mon.b on vm04
2026-03-09T14:17:45.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: cluster 2026-03-09T14:17:40.382122+0000 mon.a (mon.0) 155 : cluster [INF] mon.a calling monitor election
2026-03-09T14:17:45.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: audit 2026-03-09T14:17:40.383582+0000 mon.a (mon.0) 156 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T14:17:45.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: audit 2026-03-09T14:17:40.383700+0000 mon.a (mon.0) 157 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T14:17:45.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: audit 2026-03-09T14:17:40.390403+0000 mon.a (mon.0) 158 : audit [DBG] from='client.? 192.168.123.105:0/3517087937' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-09T14:17:45.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: cluster 2026-03-09T14:17:40.909977+0000 mgr.x (mgr.14150) 46 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:45.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: audit 2026-03-09T14:17:41.378729+0000 mon.a (mon.0) 159 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T14:17:45.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: audit 2026-03-09T14:17:41.776349+0000 mon.a (mon.0) 160 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:17:45.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: audit 2026-03-09T14:17:42.378972+0000 mon.a (mon.0) 161 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T14:17:45.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: cluster 2026-03-09T14:17:42.379592+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election
2026-03-09T14:17:45.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: audit 2026-03-09T14:17:42.776213+0000 mon.a (mon.0) 162 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:17:45.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: cluster 2026-03-09T14:17:42.910135+0000 mgr.x (mgr.14150) 47 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:45.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: audit 2026-03-09T14:17:43.379185+0000 mon.a (mon.0) 163 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T14:17:45.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: audit 2026-03-09T14:17:43.776255+0000 mon.a (mon.0) 164 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:17:45.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: audit 2026-03-09T14:17:44.379003+0000 mon.a (mon.0) 165 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T14:17:45.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: audit 2026-03-09T14:17:44.776601+0000 mon.a (mon.0) 166 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:17:45.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: audit 2026-03-09T14:17:45.379316+0000 mon.a (mon.0) 167 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T14:17:45.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: cluster 2026-03-09T14:17:45.387803+0000 mon.a (mon.0) 168 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1)
2026-03-09T14:17:45.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: cluster 2026-03-09T14:17:45.392190+0000 mon.a (mon.0) 169 : cluster [DBG] monmap epoch 2
2026-03-09T14:17:45.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: cluster 2026-03-09T14:17:45.392235+0000 mon.a (mon.0) 170 : cluster [DBG] fsid 3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:17:45.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: cluster 2026-03-09T14:17:45.392276+0000 mon.a (mon.0) 171 : cluster [DBG] last_changed 2026-03-09T14:17:40.379561+0000
2026-03-09T14:17:45.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: cluster 2026-03-09T14:17:45.392316+0000 mon.a (mon.0) 172 : cluster [DBG] created 2026-03-09T14:16:36.936076+0000
2026-03-09T14:17:45.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: cluster 2026-03-09T14:17:45.392356+0000 mon.a (mon.0) 173 : cluster [DBG] min_mon_release 19 (squid)
2026-03-09T14:17:45.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: cluster 2026-03-09T14:17:45.392397+0000 mon.a (mon.0) 174 : cluster [DBG] election_strategy: 1
2026-03-09T14:17:45.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: cluster 2026-03-09T14:17:45.392437+0000 mon.a (mon.0) 175 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a
2026-03-09T14:17:45.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: cluster 2026-03-09T14:17:45.392478+0000 mon.a (mon.0) 176 : cluster [DBG] 1: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.c
2026-03-09T14:17:45.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: cluster 2026-03-09T14:17:45.392848+0000 mon.a (mon.0) 177 : cluster [DBG] fsmap
2026-03-09T14:17:45.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: cluster 2026-03-09T14:17:45.392933+0000 mon.a (mon.0) 178 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-09T14:17:45.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: cluster 2026-03-09T14:17:45.393165+0000 mon.a (mon.0) 179 : cluster [DBG] mgrmap e12: x(active, since 46s)
2026-03-09T14:17:45.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: cluster 2026-03-09T14:17:45.393456+0000 mon.a (mon.0) 180 : cluster [INF] overall HEALTH_OK
2026-03-09T14:17:45.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: audit 2026-03-09T14:17:45.399933+0000 mon.a (mon.0) 181 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:45.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: audit 2026-03-09T14:17:45.403794+0000 mon.a (mon.0) 182 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:45.765 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: audit 2026-03-09T14:17:45.411056+0000 mon.a (mon.0) 183 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:45.765 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: audit 2026-03-09T14:17:45.416104+0000 mon.a (mon.0) 184 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:45.765 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:45 vm05 bash[20070]: audit 2026-03-09T14:17:45.431399+0000 mon.a (mon.0) 185 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:17:45.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: cephadm 2026-03-09T14:17:40.151139+0000 mgr.x (mgr.14150) 45 : cephadm [INF] Deploying daemon mon.b on vm04
2026-03-09T14:17:45.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: cluster 2026-03-09T14:17:40.382122+0000 mon.a (mon.0) 155 : cluster [INF] mon.a calling monitor election
2026-03-09T14:17:45.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: audit 2026-03-09T14:17:40.383582+0000 mon.a (mon.0) 156 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T14:17:45.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: audit 2026-03-09T14:17:40.383700+0000 mon.a (mon.0) 157 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T14:17:45.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: audit 2026-03-09T14:17:40.390403+0000 mon.a (mon.0) 158 : audit [DBG] from='client.? 192.168.123.105:0/3517087937' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-09T14:17:45.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: cluster 2026-03-09T14:17:40.909977+0000 mgr.x (mgr.14150) 46 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:45.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: audit 2026-03-09T14:17:41.378729+0000 mon.a (mon.0) 159 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T14:17:45.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: audit 2026-03-09T14:17:41.776349+0000 mon.a (mon.0) 160 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:17:45.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: audit 2026-03-09T14:17:42.378972+0000 mon.a (mon.0) 161 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T14:17:45.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: cluster 2026-03-09T14:17:42.379592+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election
2026-03-09T14:17:45.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: audit 2026-03-09T14:17:42.776213+0000 mon.a (mon.0) 162 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:17:45.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: cluster 2026-03-09T14:17:42.910135+0000 mgr.x (mgr.14150) 47 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:45.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: audit 2026-03-09T14:17:43.379185+0000 mon.a (mon.0) 163 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T14:17:45.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: audit 2026-03-09T14:17:43.776255+0000 mon.a (mon.0) 164 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:17:45.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: audit 2026-03-09T14:17:44.379003+0000 mon.a (mon.0) 165 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T14:17:45.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: audit 2026-03-09T14:17:44.776601+0000 mon.a (mon.0) 166 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:17:45.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: audit 2026-03-09T14:17:45.379316+0000 mon.a (mon.0) 167 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T14:17:45.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: cluster 2026-03-09T14:17:45.387803+0000 mon.a (mon.0) 168 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1)
2026-03-09T14:17:45.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: cluster 2026-03-09T14:17:45.392190+0000 mon.a (mon.0) 169 : cluster [DBG] monmap epoch 2
2026-03-09T14:17:45.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: cluster 2026-03-09T14:17:45.392235+0000 mon.a (mon.0) 170 : cluster [DBG] fsid 3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:17:45.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: cluster 2026-03-09T14:17:45.392276+0000 mon.a (mon.0) 171 : cluster [DBG] last_changed 2026-03-09T14:17:40.379561+0000
2026-03-09T14:17:45.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: cluster 2026-03-09T14:17:45.392316+0000 mon.a (mon.0) 172 : cluster [DBG] created 2026-03-09T14:16:36.936076+0000
2026-03-09T14:17:45.808 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: cluster 2026-03-09T14:17:45.392356+0000 mon.a (mon.0) 173 : cluster [DBG] min_mon_release 19 (squid)
2026-03-09T14:17:45.808 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: cluster 2026-03-09T14:17:45.392397+0000 mon.a (mon.0) 174 : cluster [DBG] election_strategy: 1
2026-03-09T14:17:45.808 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: cluster 2026-03-09T14:17:45.392437+0000 mon.a (mon.0) 175 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a
2026-03-09T14:17:45.808 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: cluster 2026-03-09T14:17:45.392478+0000 mon.a (mon.0) 176 : cluster [DBG] 1: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.c
2026-03-09T14:17:45.808 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: cluster 2026-03-09T14:17:45.392848+0000 mon.a (mon.0) 177 : cluster [DBG] fsmap
2026-03-09T14:17:45.808 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: cluster 2026-03-09T14:17:45.392933+0000 mon.a (mon.0) 178 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-09T14:17:45.808 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: cluster 2026-03-09T14:17:45.393165+0000 mon.a (mon.0) 179 : cluster [DBG] mgrmap e12: x(active, since 46s)
2026-03-09T14:17:45.808 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: cluster 2026-03-09T14:17:45.393456+0000 mon.a (mon.0) 180 : cluster [INF] overall HEALTH_OK
2026-03-09T14:17:45.808 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: audit 2026-03-09T14:17:45.399933+0000 mon.a (mon.0) 181 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:45.808 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: audit 2026-03-09T14:17:45.403794+0000 mon.a (mon.0) 182 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:45.808 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: audit 2026-03-09T14:17:45.411056+0000 mon.a (mon.0) 183 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:45.808 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: audit 2026-03-09T14:17:45.416104+0000 mon.a (mon.0) 184 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:45.808 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:45 vm03 bash[17524]: audit 2026-03-09T14:17:45.431399+0000 mon.a (mon.0) 185 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:17:46.477 INFO:tasks.cephadm:Waiting for 3 mons in monmap...
2026-03-09T14:17:46.478 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph mon dump -f json
2026-03-09T14:17:46.806 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:17:46 vm03 bash[17796]: debug 2026-03-09T14:17:46.375+0000 7fbc035c9640 -1 mgr.server handle_report got status from non-daemon mon.c
2026-03-09T14:17:50.208 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.c/config
2026-03-09T14:17:51.164 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T14:17:51.164 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":3,"fsid":"3346de4a-1bc2-11f1-95ae-3796c8433614","modified":"2026-03-09T14:17:45.777986Z","created":"2026-03-09T14:16:36.936076Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:3300","nonce":0},{"type":"v1","addr":"192.168.123.103:6789","nonce":0}]},"addr":"192.168.123.103:6789/0","public_addr":"192.168.123.103:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:3300","nonce":0},{"type":"v1","addr":"192.168.123.104:6789","nonce":0}]},"addr":"192.168.123.104:6789/0","public_addr":"192.168.123.104:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]}
2026-03-09T14:17:51.164 INFO:teuthology.orchestra.run.vm05.stderr:dumped monmap epoch 3
2026-03-09T14:17:51.175 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: audit 2026-03-09T14:17:45.782521+0000 mon.a (mon.0) 187 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T14:17:51.175 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: cluster 2026-03-09T14:17:45.782704+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election
2026-03-09T14:17:51.175 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: audit 2026-03-09T14:17:45.782763+0000 mon.a (mon.0) 188 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:17:51.175 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: cluster 2026-03-09T14:17:45.782956+0000 mon.a (mon.0) 189 : cluster [INF] mon.a calling monitor election
2026-03-09T14:17:51.175 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: audit 2026-03-09T14:17:45.784057+0000 mon.a (mon.0) 190 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T14:17:51.175 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: audit 2026-03-09T14:17:46.776723+0000 mon.a (mon.0) 191 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:17:51.175 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: cluster 2026-03-09T14:17:46.910466+0000 mgr.x (mgr.14150) 49 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:51.175 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: audit 2026-03-09T14:17:47.776609+0000 mon.a (mon.0) 192 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:17:51.175 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: audit 2026-03-09T14:17:48.776819+0000 mon.a (mon.0) 193 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:17:51.175 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: cluster 2026-03-09T14:17:48.910635+0000 mgr.x (mgr.14150) 50 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:51.175 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: audit 2026-03-09T14:17:49.776986+0000 mon.a (mon.0) 194 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:17:51.175 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: audit 2026-03-09T14:17:50.776826+0000 mon.a (mon.0) 195 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:17:51.175 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: cluster 2026-03-09T14:17:50.784859+0000 mon.a (mon.0) 196 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1)
2026-03-09T14:17:51.175 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: cluster 2026-03-09T14:17:50.787771+0000 mon.a (mon.0) 197 : cluster [DBG] monmap epoch 3
2026-03-09T14:17:51.175 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: cluster 2026-03-09T14:17:50.787777+0000 mon.a (mon.0) 198 : cluster [DBG] fsid 3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:17:51.175 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: cluster 2026-03-09T14:17:50.787780+0000 mon.a (mon.0) 199 : cluster [DBG] last_changed 2026-03-09T14:17:45.777986+0000
2026-03-09T14:17:51.175 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: cluster 2026-03-09T14:17:50.787784+0000 mon.a (mon.0) 200 : cluster [DBG] created 2026-03-09T14:16:36.936076+0000
2026-03-09T14:17:51.175 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: cluster 2026-03-09T14:17:50.787787+0000 mon.a (mon.0) 201 : cluster [DBG] min_mon_release 19 (squid)
2026-03-09T14:17:51.175 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: cluster 2026-03-09T14:17:50.787791+0000 mon.a (mon.0) 202 : cluster [DBG] election_strategy: 1
2026-03-09T14:17:51.175 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: cluster 2026-03-09T14:17:50.787793+0000 mon.a (mon.0) 203 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a
2026-03-09T14:17:51.175 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: cluster 2026-03-09T14:17:50.787797+0000 mon.a (mon.0) 204 : cluster [DBG] 1: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.c
2026-03-09T14:17:51.175 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: cluster 2026-03-09T14:17:50.787799+0000 mon.a (mon.0) 205 : cluster [DBG] 2: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b
2026-03-09T14:17:51.175 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: cluster 2026-03-09T14:17:50.788135+0000 mon.a (mon.0) 206 : cluster [DBG] fsmap
2026-03-09T14:17:51.175 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: cluster 2026-03-09T14:17:50.788147+0000 mon.a (mon.0) 207 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-09T14:17:51.176 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: cluster 2026-03-09T14:17:50.788264+0000 mon.a (mon.0) 208 : cluster [DBG] mgrmap e12: x(active, since 51s)
2026-03-09T14:17:51.176 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: cluster 2026-03-09T14:17:50.788356+0000 mon.a (mon.0) 209 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)
2026-03-09T14:17:51.176 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: cluster 2026-03-09T14:17:50.791042+0000 mon.a (mon.0) 210 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,c
2026-03-09T14:17:51.176 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: cluster 2026-03-09T14:17:50.791062+0000 mon.a (mon.0) 211 : cluster [WRN] [WRN] MON_DOWN: 1/3 mons down, quorum a,c
2026-03-09T14:17:51.176 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: cluster 2026-03-09T14:17:50.791077+0000 mon.a (mon.0) 212 : cluster [WRN] mon.b (rank 2) addr [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] is down (out of quorum)
2026-03-09T14:17:51.176 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: audit 2026-03-09T14:17:50.793267+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:51.176 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: audit 2026-03-09T14:17:50.796210+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:51.176 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: audit 2026-03-09T14:17:50.799389+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:51.176 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: audit 2026-03-09T14:17:50.802098+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: audit 2026-03-09T14:17:50.804721+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:51.176 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: audit 2026-03-09T14:17:50.804721+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:51.176 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: audit 2026-03-09T14:17:50.805412+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:17:51.176 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: audit 2026-03-09T14:17:50.805412+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:17:51.176 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: audit 2026-03-09T14:17:50.805966+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:17:51.176 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:50 vm05 bash[20070]: audit 2026-03-09T14:17:50.805966+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:17:51.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: audit 2026-03-09T14:17:45.782521+0000 mon.a (mon.0) 187 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:17:51.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: audit 2026-03-09T14:17:45.782521+0000 mon.a (mon.0) 187 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:17:51.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: cluster 2026-03-09T14:17:45.782704+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T14:17:51.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: cluster 2026-03-09T14:17:45.782704+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T14:17:51.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: audit 2026-03-09T14:17:45.782763+0000 mon.a (mon.0) 188 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:17:51.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: audit 2026-03-09T14:17:45.782763+0000 mon.a (mon.0) 188 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:17:51.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: cluster 2026-03-09T14:17:45.782956+0000 mon.a (mon.0) 189 : cluster [INF] mon.a calling monitor election 2026-03-09T14:17:51.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: cluster 2026-03-09T14:17:45.782956+0000 mon.a (mon.0) 189 : cluster [INF] mon.a calling monitor election 2026-03-09T14:17:51.209 
2026-03-09T14:17:51.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: audit 2026-03-09T14:17:45.784057+0000 mon.a (mon.0) 190 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T14:17:51.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: audit 2026-03-09T14:17:46.776723+0000 mon.a (mon.0) 191 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:17:51.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: cluster 2026-03-09T14:17:46.910466+0000 mgr.x (mgr.14150) 49 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:51.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: audit 2026-03-09T14:17:47.776609+0000 mon.a (mon.0) 192 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:17:51.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: audit 2026-03-09T14:17:48.776819+0000 mon.a (mon.0) 193 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:17:51.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: cluster 2026-03-09T14:17:48.910635+0000 mgr.x (mgr.14150) 50 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:51.210 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: audit 2026-03-09T14:17:49.776986+0000 mon.a (mon.0) 194 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:17:51.210 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: audit 2026-03-09T14:17:50.776826+0000 mon.a (mon.0) 195 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:17:51.210 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: cluster 2026-03-09T14:17:50.784859+0000 mon.a (mon.0) 196 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1)
2026-03-09T14:17:51.210 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: cluster 2026-03-09T14:17:50.787771+0000 mon.a (mon.0) 197 : cluster [DBG] monmap epoch 3
2026-03-09T14:17:51.210 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: cluster 2026-03-09T14:17:50.787777+0000 mon.a (mon.0) 198 : cluster [DBG] fsid 3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:17:51.210 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: cluster 2026-03-09T14:17:50.787780+0000 mon.a (mon.0) 199 : cluster [DBG] last_changed 2026-03-09T14:17:45.777986+0000
2026-03-09T14:17:51.210 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: cluster 2026-03-09T14:17:50.787784+0000 mon.a (mon.0) 200 : cluster [DBG] created 2026-03-09T14:16:36.936076+0000
2026-03-09T14:17:51.210 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: cluster 2026-03-09T14:17:50.787787+0000 mon.a (mon.0) 201 : cluster [DBG] min_mon_release 19 (squid)
2026-03-09T14:17:51.210 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: cluster 2026-03-09T14:17:50.787791+0000 mon.a (mon.0) 202 : cluster [DBG] election_strategy: 1
2026-03-09T14:17:51.210 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: cluster 2026-03-09T14:17:50.787793+0000 mon.a (mon.0) 203 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a
2026-03-09T14:17:51.210 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: cluster 2026-03-09T14:17:50.787797+0000 mon.a (mon.0) 204 : cluster [DBG] 1: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.c
2026-03-09T14:17:51.210 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: cluster 2026-03-09T14:17:50.787799+0000 mon.a (mon.0) 205 : cluster [DBG] 2: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b
2026-03-09T14:17:51.210 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: cluster 2026-03-09T14:17:50.788135+0000 mon.a (mon.0) 206 : cluster [DBG] fsmap
2026-03-09T14:17:51.210 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: cluster 2026-03-09T14:17:50.788147+0000 mon.a (mon.0) 207 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-09T14:17:51.210 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: cluster 2026-03-09T14:17:50.788264+0000 mon.a (mon.0) 208 : cluster [DBG] mgrmap e12: x(active, since 51s)
2026-03-09T14:17:51.210 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: cluster 2026-03-09T14:17:50.788356+0000 mon.a (mon.0) 209 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)
2026-03-09T14:17:51.210 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: cluster 2026-03-09T14:17:50.791042+0000 mon.a (mon.0) 210 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,c
2026-03-09T14:17:51.210 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: cluster 2026-03-09T14:17:50.791062+0000 mon.a (mon.0) 211 : cluster [WRN] [WRN] MON_DOWN: 1/3 mons down, quorum a,c
2026-03-09T14:17:51.210 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: cluster 2026-03-09T14:17:50.791077+0000 mon.a (mon.0) 212 : cluster [WRN] mon.b (rank 2) addr [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] is down (out of quorum)
2026-03-09T14:17:51.210 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: audit 2026-03-09T14:17:50.793267+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:51.210 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: audit 2026-03-09T14:17:50.796210+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:51.210 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: audit 2026-03-09T14:17:50.799389+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:51.210 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: audit 2026-03-09T14:17:50.802098+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:51.210 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: audit 2026-03-09T14:17:50.804721+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:51.211 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: audit 2026-03-09T14:17:50.805412+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:17:51.211 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:50 vm03 bash[17524]: audit 2026-03-09T14:17:50.805966+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:17:51.233 INFO:tasks.cephadm:Generating final ceph.conf file...
2026-03-09T14:17:51.233 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph config generate-minimal-conf
2026-03-09T14:17:52.208 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: cephadm 2026-03-09T14:17:50.806565+0000 mgr.x (mgr.14150) 51 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf
2026-03-09T14:17:52.208 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: cephadm 2026-03-09T14:17:50.806669+0000 mgr.x (mgr.14150) 52 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf
2026-03-09T14:17:52.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: cephadm 2026-03-09T14:17:50.806761+0000 mgr.x (mgr.14150) 53 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf
2026-03-09T14:17:52.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: cephadm 2026-03-09T14:17:50.854468+0000 mgr.x (mgr.14150) 54 : cephadm [INF] Updating vm03:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/config/ceph.conf
2026-03-09T14:17:52.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: cephadm 2026-03-09T14:17:50.854543+0000 mgr.x (mgr.14150) 55 : cephadm [INF] Updating vm04:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/config/ceph.conf
2026-03-09T14:17:52.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: cephadm 2026-03-09T14:17:50.856323+0000 mgr.x (mgr.14150) 56 : cephadm [INF] Updating vm05:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/config/ceph.conf
2026-03-09T14:17:52.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: audit 2026-03-09T14:17:50.900466+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:52.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: audit 2026-03-09T14:17:50.904602+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:52.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: audit 2026-03-09T14:17:50.907599+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:52.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: audit 2026-03-09T14:17:50.910746+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:52.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: cluster 2026-03-09T14:17:50.910832+0000 mgr.x (mgr.14150) 57 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:52.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: audit 2026-03-09T14:17:50.913554+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:52.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: audit 2026-03-09T14:17:50.917222+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:52.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: audit 2026-03-09T14:17:50.920582+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:52.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: audit 2026-03-09T14:17:50.939791+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:52.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: audit 2026-03-09T14:17:50.943035+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:52.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: audit 2026-03-09T14:17:50.946135+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:52.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: audit 2026-03-09T14:17:50.949015+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:52.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: cephadm 2026-03-09T14:17:50.949321+0000 mgr.x (mgr.14150) 58 : cephadm [INF] Reconfiguring mon.a (unknown last config time)...
2026-03-09T14:17:52.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: audit 2026-03-09T14:17:50.949535+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T14:17:52.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: audit 2026-03-09T14:17:50.949990+0000 mon.a (mon.0) 232 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T14:17:52.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: audit 2026-03-09T14:17:50.950547+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:17:52.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: cephadm 2026-03-09T14:17:50.951040+0000 mgr.x (mgr.14150) 59 : cephadm [INF] Reconfiguring daemon mon.a on vm03
2026-03-09T14:17:52.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: audit 2026-03-09T14:17:51.160720+0000 mon.a (mon.0) 234 : audit [DBG] from='client.? 192.168.123.105:0/324353046' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-09T14:17:52.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: audit 2026-03-09T14:17:51.334576+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:52.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: audit 2026-03-09T14:17:51.338280+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:52.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: cephadm 2026-03-09T14:17:51.338995+0000 mgr.x (mgr.14150) 60 : cephadm [INF] Reconfiguring mon.b (monmap changed)...
2026-03-09T14:17:52.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: audit 2026-03-09T14:17:51.339184+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T14:17:52.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: audit 2026-03-09T14:17:51.339654+0000 mon.a (mon.0) 238 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T14:17:52.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: audit 2026-03-09T14:17:51.340059+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:17:52.209 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: cephadm 2026-03-09T14:17:51.340521+0000 mgr.x (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mon.b on vm04
2026-03-09T14:17:52.210 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:51 vm03 bash[17524]: audit 2026-03-09T14:17:51.777023+0000 mon.a (mon.0) 240 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:17:52.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: cephadm 2026-03-09T14:17:50.806565+0000 mgr.x (mgr.14150) 51 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf
2026-03-09T14:17:52.264 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: cephadm 2026-03-09T14:17:50.806669+0000 mgr.x (mgr.14150) 52 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf
2026-03-09T14:17:52.264 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: cephadm 2026-03-09T14:17:50.806761+0000 mgr.x (mgr.14150) 53 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf
2026-03-09T14:17:52.264 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: cephadm 2026-03-09T14:17:50.854468+0000 mgr.x (mgr.14150) 54 : cephadm [INF] Updating vm03:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/config/ceph.conf
2026-03-09T14:17:52.264 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: cephadm 2026-03-09T14:17:50.854543+0000 mgr.x (mgr.14150) 55 : cephadm [INF] Updating vm04:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/config/ceph.conf
2026-03-09T14:17:52.264 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: cephadm 2026-03-09T14:17:50.856323+0000 mgr.x (mgr.14150) 56 : cephadm [INF] Updating vm05:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/config/ceph.conf
2026-03-09T14:17:52.264 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: audit 2026-03-09T14:17:50.900466+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:52.264 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: audit 2026-03-09T14:17:50.904602+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:52.264 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: audit 2026-03-09T14:17:50.907599+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:52.264 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: audit 2026-03-09T14:17:50.910746+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:52.264 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: cluster 2026-03-09T14:17:50.910832+0000 mgr.x (mgr.14150) 57 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:52.264 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: audit 2026-03-09T14:17:50.913554+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:52.264 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: audit 2026-03-09T14:17:50.917222+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:52.264 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: audit 2026-03-09T14:17:50.920582+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:52.264 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: audit 2026-03-09T14:17:50.939791+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:52.264 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: audit 2026-03-09T14:17:50.943035+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:52.264 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: audit 2026-03-09T14:17:50.946135+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:52.264 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: audit 2026-03-09T14:17:50.949015+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:52.264 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: cephadm 2026-03-09T14:17:50.949321+0000 mgr.x (mgr.14150) 58 : cephadm [INF] Reconfiguring mon.a (unknown last config time)...
2026-03-09T14:17:52.264 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: audit 2026-03-09T14:17:50.949535+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T14:17:52.265 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: audit 2026-03-09T14:17:50.949990+0000 mon.a (mon.0) 232 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T14:17:52.265 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: audit 2026-03-09T14:17:50.950547+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:17:52.265 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: cephadm 2026-03-09T14:17:50.951040+0000 mgr.x (mgr.14150) 59 : cephadm [INF] Reconfiguring daemon mon.a on vm03
2026-03-09T14:17:52.265 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: audit 2026-03-09T14:17:51.160720+0000 mon.a (mon.0) 234 : audit [DBG] from='client.? 192.168.123.105:0/324353046' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-09T14:17:52.265 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: audit 2026-03-09T14:17:51.334576+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:52.265 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: audit 2026-03-09T14:17:51.338280+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:17:52.265 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: cephadm 2026-03-09T14:17:51.338995+0000 mgr.x (mgr.14150) 60 : cephadm [INF] Reconfiguring mon.b (monmap changed)...
2026-03-09T14:17:52.265 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: audit 2026-03-09T14:17:51.339184+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T14:17:52.265 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: audit 2026-03-09T14:17:51.339654+0000 mon.a (mon.0) 238 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T14:17:52.265 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: audit 2026-03-09T14:17:51.340059+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:17:52.265 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: cephadm 2026-03-09T14:17:51.340521+0000 mgr.x (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mon.b on vm04
2026-03-09T14:17:52.265 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:51 vm05 bash[20070]: audit 2026-03-09T14:17:51.777023+0000 mon.a (mon.0) 240 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:17:53.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cephadm 2026-03-09T14:17:40.151139+0000 mgr.x (mgr.14150) 45 : cephadm [INF] Deploying daemon mon.b on vm04
2026-03-09T14:17:53.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:40.382122+0000 mon.a (mon.0) 155 : cluster [INF] mon.a calling monitor election
2026-03-09T14:17:53.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:40.383582+0000 mon.a (mon.0) 156 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T14:17:53.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:40.383700+0000 mon.a (mon.0) 157 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T14:17:53.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:40.390403+0000 mon.a (mon.0) 158 : audit [DBG] from='client.? 192.168.123.105:0/3517087937' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-09T14:17:53.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:40.909977+0000 mgr.x (mgr.14150) 46 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:41.378729+0000 mon.a (mon.0) 159 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:41.776349+0000 mon.a (mon.0) 160 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:42.378972+0000 mon.a (mon.0) 161 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:42.379592+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election
2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:42.776213+0000 mon.a (mon.0) 162 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:42.910135+0000 mgr.x (mgr.14150) 47 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:43.379185+0000 mon.a (mon.0) 163 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:43.776255+0000 mon.a (mon.0) 164 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:44.379003+0000 mon.a (mon.0) 165 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:44.776601+0000 mon.a (mon.0) 166 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:45.379316+0000 mon.a (mon.0) 167 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:45.387803+0000 mon.a (mon.0) 168 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1)
2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:45.392190+0000 mon.a (mon.0) 169 : cluster [DBG] monmap epoch 2
2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:45.392235+0000 mon.a (mon.0) 170 : cluster [DBG] fsid 3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:45.392276+0000 mon.a (mon.0) 171 : cluster [DBG] last_changed 2026-03-09T14:17:40.379561+0000
2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:45.392316+0000 mon.a (mon.0) 172 : cluster [DBG] created 2026-03-09T14:16:36.936076+0000
2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:45.392356+0000 mon.a (mon.0) 173 : cluster [DBG] min_mon_release 19 (squid)
2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:45.392397+0000 mon.a (mon.0) 174 : cluster [DBG] election_strategy: 1
2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:45.392437+0000 mon.a (mon.0) 175 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a
2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:45.392478+0000 mon.a (mon.0) 176 : cluster [DBG] 1: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.c
2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:45.392848+0000 mon.a (mon.0) 177 : cluster [DBG] fsmap
[DBG] fsmap 2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:45.392933+0000 mon.a (mon.0) 178 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:45.392933+0000 mon.a (mon.0) 178 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:45.393165+0000 mon.a (mon.0) 179 : cluster [DBG] mgrmap e12: x(active, since 46s) 2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:45.393165+0000 mon.a (mon.0) 179 : cluster [DBG] mgrmap e12: x(active, since 46s) 2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:45.393456+0000 mon.a (mon.0) 180 : cluster [INF] overall HEALTH_OK 2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:45.393456+0000 mon.a (mon.0) 180 : cluster [INF] overall HEALTH_OK 2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:45.399933+0000 mon.a (mon.0) 181 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:45.399933+0000 mon.a (mon.0) 181 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:45.403794+0000 mon.a (mon.0) 182 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:45.403794+0000 mon.a (mon.0) 182 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:45.411056+0000 mon.a (mon.0) 183 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:45.411056+0000 mon.a (mon.0) 183 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:45.416104+0000 mon.a (mon.0) 184 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:45.416104+0000 mon.a (mon.0) 184 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:45.431399+0000 mon.a (mon.0) 185 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:45.431399+0000 mon.a (mon.0) 185 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 
cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:17:53.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:45.782521+0000 mon.a (mon.0) 187 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:45.782521+0000 mon.a (mon.0) 187 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:45.782704+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:45.782704+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:45.782763+0000 mon.a (mon.0) 188 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:45.782763+0000 mon.a (mon.0) 188 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:45.782956+0000 mon.a (mon.0) 189 : cluster [INF] mon.a calling monitor election 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:45.782956+0000 mon.a (mon.0) 189 : cluster [INF] mon.a calling monitor election 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:45.784057+0000 mon.a (mon.0) 190 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:45.784057+0000 mon.a (mon.0) 190 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:46.776723+0000 mon.a (mon.0) 191 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:46.776723+0000 mon.a (mon.0) 191 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:46.910466+0000 mgr.x (mgr.14150) 49 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:46.910466+0000 mgr.x (mgr.14150) 49 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 
0 B used, 0 B / 0 B avail 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:47.776609+0000 mon.a (mon.0) 192 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:47.776609+0000 mon.a (mon.0) 192 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:48.776819+0000 mon.a (mon.0) 193 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:48.776819+0000 mon.a (mon.0) 193 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:48.910635+0000 mgr.x (mgr.14150) 50 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:48.910635+0000 mgr.x (mgr.14150) 50 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:49.776986+0000 mon.a (mon.0) 194 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:49.776986+0000 mon.a (mon.0) 194 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.776826+0000 mon.a (mon.0) 195 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.776826+0000 mon.a (mon.0) 195 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.784859+0000 mon.a (mon.0) 196 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.784859+0000 mon.a (mon.0) 196 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.787771+0000 mon.a (mon.0) 197 : cluster [DBG] monmap epoch 3 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.787771+0000 mon.a (mon.0) 197 : cluster [DBG] monmap 
epoch 3 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.787777+0000 mon.a (mon.0) 198 : cluster [DBG] fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.787777+0000 mon.a (mon.0) 198 : cluster [DBG] fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.787780+0000 mon.a (mon.0) 199 : cluster [DBG] last_changed 2026-03-09T14:17:45.777986+0000 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.787780+0000 mon.a (mon.0) 199 : cluster [DBG] last_changed 2026-03-09T14:17:45.777986+0000 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.787784+0000 mon.a (mon.0) 200 : cluster [DBG] created 2026-03-09T14:16:36.936076+0000 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.787784+0000 mon.a (mon.0) 200 : cluster [DBG] created 2026-03-09T14:16:36.936076+0000 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.787787+0000 mon.a (mon.0) 201 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.787787+0000 mon.a (mon.0) 201 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.787791+0000 mon.a (mon.0) 202 : cluster [DBG] election_strategy: 1 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.787791+0000 mon.a (mon.0) 202 : cluster [DBG] election_strategy: 1 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.787793+0000 mon.a (mon.0) 203 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.787793+0000 mon.a (mon.0) 203 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.787797+0000 mon.a (mon.0) 204 : cluster [DBG] 1: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.c 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.787797+0000 mon.a (mon.0) 204 : cluster [DBG] 1: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.c 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.787799+0000 mon.a (mon.0) 205 : cluster [DBG] 2: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.787799+0000 mon.a (mon.0) 205 : cluster [DBG] 2: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 
vm04 bash[19581]: cluster 2026-03-09T14:17:50.788135+0000 mon.a (mon.0) 206 : cluster [DBG] fsmap 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.788135+0000 mon.a (mon.0) 206 : cluster [DBG] fsmap 2026-03-09T14:17:53.264 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.788147+0000 mon.a (mon.0) 207 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.788147+0000 mon.a (mon.0) 207 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.788264+0000 mon.a (mon.0) 208 : cluster [DBG] mgrmap e12: x(active, since 51s) 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.788264+0000 mon.a (mon.0) 208 : cluster [DBG] mgrmap e12: x(active, since 51s) 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.788356+0000 mon.a (mon.0) 209 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN) 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.788356+0000 mon.a (mon.0) 209 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN) 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.791042+0000 mon.a (mon.0) 210 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,c 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.791042+0000 mon.a (mon.0) 210 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,c 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.791062+0000 mon.a (mon.0) 211 : cluster [WRN] [WRN] MON_DOWN: 1/3 mons down, quorum a,c 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.791062+0000 mon.a (mon.0) 211 : cluster [WRN] [WRN] MON_DOWN: 1/3 mons down, quorum a,c 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.791077+0000 mon.a (mon.0) 212 : cluster [WRN] mon.b (rank 2) addr [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] is down (out of quorum) 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.791077+0000 mon.a (mon.0) 212 : cluster [WRN] mon.b (rank 2) addr [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] is down (out of quorum) 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.793267+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.793267+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.796210+0000 mon.a 
(mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.796210+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.799389+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.799389+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.802098+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.802098+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.804721+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.804721+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.805412+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.805412+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.805966+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.805966+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cephadm 2026-03-09T14:17:50.806565+0000 mgr.x (mgr.14150) 51 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cephadm 2026-03-09T14:17:50.806565+0000 mgr.x (mgr.14150) 51 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cephadm 2026-03-09T14:17:50.806669+0000 mgr.x (mgr.14150) 52 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 
bash[19581]: cephadm 2026-03-09T14:17:50.806669+0000 mgr.x (mgr.14150) 52 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cephadm 2026-03-09T14:17:50.806761+0000 mgr.x (mgr.14150) 53 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cephadm 2026-03-09T14:17:50.806761+0000 mgr.x (mgr.14150) 53 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cephadm 2026-03-09T14:17:50.854468+0000 mgr.x (mgr.14150) 54 : cephadm [INF] Updating vm03:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/config/ceph.conf 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cephadm 2026-03-09T14:17:50.854468+0000 mgr.x (mgr.14150) 54 : cephadm [INF] Updating vm03:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/config/ceph.conf 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cephadm 2026-03-09T14:17:50.854543+0000 mgr.x (mgr.14150) 55 : cephadm [INF] Updating vm04:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/config/ceph.conf 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cephadm 2026-03-09T14:17:50.854543+0000 mgr.x (mgr.14150) 55 : cephadm [INF] Updating vm04:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/config/ceph.conf 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cephadm 2026-03-09T14:17:50.856323+0000 mgr.x (mgr.14150) 56 : cephadm [INF] Updating vm05:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/config/ceph.conf 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cephadm 2026-03-09T14:17:50.856323+0000 mgr.x (mgr.14150) 56 : cephadm [INF] Updating vm05:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/config/ceph.conf 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.900466+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.900466+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.904602+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.904602+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.907599+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.907599+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 
2026-03-09T14:17:50.910746+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.910746+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.910832+0000 mgr.x (mgr.14150) 57 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cluster 2026-03-09T14:17:50.910832+0000 mgr.x (mgr.14150) 57 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.913554+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.913554+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.917222+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.917222+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.920582+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.920582+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.939791+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.939791+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.943035+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.943035+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.946135+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.946135+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 
2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.949015+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.949015+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cephadm 2026-03-09T14:17:50.949321+0000 mgr.x (mgr.14150) 58 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-09T14:17:53.266 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cephadm 2026-03-09T14:17:50.949321+0000 mgr.x (mgr.14150) 58 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-09T14:17:53.266 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.949535+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:17:53.266 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.949535+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:17:53.266 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.949990+0000 mon.a (mon.0) 232 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:17:53.266 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.949990+0000 mon.a (mon.0) 232 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:17:53.266 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.950547+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:17:53.266 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:50.950547+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:17:53.266 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cephadm 2026-03-09T14:17:50.951040+0000 mgr.x (mgr.14150) 59 : cephadm [INF] Reconfiguring daemon mon.a on vm03 2026-03-09T14:17:53.266 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cephadm 2026-03-09T14:17:50.951040+0000 mgr.x (mgr.14150) 59 : cephadm [INF] Reconfiguring daemon mon.a on vm03 2026-03-09T14:17:53.266 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:51.160720+0000 mon.a (mon.0) 234 : audit [DBG] from='client.? 192.168.123.105:0/324353046' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:17:53.266 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:51.160720+0000 mon.a (mon.0) 234 : audit [DBG] from='client.? 
192.168.123.105:0/324353046' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:17:53.266 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:51.334576+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.266 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:51.334576+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.266 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:51.338280+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.266 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:51.338280+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:17:53.266 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cephadm 2026-03-09T14:17:51.338995+0000 mgr.x (mgr.14150) 60 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-09T14:17:53.266 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cephadm 2026-03-09T14:17:51.338995+0000 mgr.x (mgr.14150) 60 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-09T14:17:53.266 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:51.339184+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:17:53.266 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:51.339184+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:17:53.266 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:51.339654+0000 mon.a (mon.0) 238 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:17:53.266 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:51.339654+0000 mon.a (mon.0) 238 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:17:53.266 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:51.340059+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:17:53.266 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:51.340059+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:17:53.266 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cephadm 2026-03-09T14:17:51.340521+0000 mgr.x (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mon.b on vm04 2026-03-09T14:17:53.266 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: cephadm 2026-03-09T14:17:51.340521+0000 
mgr.x (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mon.b on vm04 2026-03-09T14:17:53.266 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:51.777023+0000 mon.a (mon.0) 240 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:17:53.266 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:52 vm04 bash[19581]: audit 2026-03-09T14:17:51.777023+0000 mon.a (mon.0) 240 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:17:53.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:47.778690+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T14:17:53.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:47.778690+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T14:17:53.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.794019+0000 mon.c (mon.1) 3 : cluster [INF] mon.c calling monitor election 2026-03-09T14:17:53.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.794019+0000 mon.c (mon.1) 3 : cluster [INF] mon.c calling monitor election 2026-03-09T14:17:53.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.794039+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election 2026-03-09T14:17:53.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.794039+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election 2026-03-09T14:17:53.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.795546+0000 mon.a (mon.0) 253 : cluster [INF] mon.a calling monitor election 2026-03-09T14:17:53.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.795546+0000 mon.a (mon.0) 253 : cluster [INF] mon.a calling monitor election 2026-03-09T14:17:53.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.797517+0000 mon.a (mon.0) 254 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T14:17:53.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.797517+0000 mon.a (mon.0) 254 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T14:17:53.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.800399+0000 mon.a (mon.0) 255 : cluster [DBG] monmap epoch 3 2026-03-09T14:17:53.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.800399+0000 mon.a (mon.0) 255 : cluster [DBG] monmap epoch 3 2026-03-09T14:17:53.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.800416+0000 mon.a (mon.0) 256 : cluster [DBG] fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 2026-03-09T14:17:53.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.800416+0000 mon.a (mon.0) 256 : cluster [DBG] fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 2026-03-09T14:17:53.762 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.800425+0000 mon.a (mon.0) 257 : cluster [DBG] last_changed 2026-03-09T14:17:45.777986+0000 2026-03-09T14:17:53.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.800425+0000 mon.a (mon.0) 257 : cluster [DBG] last_changed 2026-03-09T14:17:45.777986+0000 2026-03-09T14:17:53.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.800434+0000 mon.a (mon.0) 258 : cluster [DBG] created 2026-03-09T14:16:36.936076+0000 2026-03-09T14:17:53.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.800434+0000 mon.a (mon.0) 258 : cluster [DBG] created 2026-03-09T14:16:36.936076+0000 2026-03-09T14:17:53.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.800442+0000 mon.a (mon.0) 259 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T14:17:53.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.800442+0000 mon.a (mon.0) 259 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T14:17:53.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.800450+0000 mon.a (mon.0) 260 : cluster [DBG] election_strategy: 1 2026-03-09T14:17:53.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.800450+0000 mon.a (mon.0) 260 : cluster [DBG] election_strategy: 1 2026-03-09T14:17:53.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.800459+0000 mon.a (mon.0) 261 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T14:17:53.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.800459+0000 mon.a (mon.0) 261 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T14:17:53.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.800467+0000 mon.a (mon.0) 262 : cluster [DBG] 1: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.c 2026-03-09T14:17:53.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.800467+0000 mon.a (mon.0) 262 : cluster [DBG] 1: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.c 2026-03-09T14:17:53.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.800475+0000 mon.a (mon.0) 263 : cluster [DBG] 2: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b 2026-03-09T14:17:53.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.800475+0000 mon.a (mon.0) 263 : cluster [DBG] 2: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b 2026-03-09T14:17:53.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.800702+0000 mon.a (mon.0) 264 : cluster [DBG] fsmap 2026-03-09T14:17:53.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.800702+0000 mon.a (mon.0) 264 : cluster [DBG] fsmap 2026-03-09T14:17:53.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.800720+0000 mon.a (mon.0) 265 : cluster [DBG] osdmap e4: 0 
total, 0 up, 0 in 2026-03-09T14:17:53.763 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.800720+0000 mon.a (mon.0) 265 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T14:17:53.763 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.800824+0000 mon.a (mon.0) 266 : cluster [DBG] mgrmap e12: x(active, since 53s) 2026-03-09T14:17:53.763 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.800824+0000 mon.a (mon.0) 266 : cluster [DBG] mgrmap e12: x(active, since 53s) 2026-03-09T14:17:53.763 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.800899+0000 mon.a (mon.0) 267 : cluster [INF] Health check cleared: MON_DOWN (was: 1/3 mons down, quorum a,c) 2026-03-09T14:17:53.763 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.800899+0000 mon.a (mon.0) 267 : cluster [INF] Health check cleared: MON_DOWN (was: 1/3 mons down, quorum a,c) 2026-03-09T14:17:53.763 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.800910+0000 mon.a (mon.0) 268 : cluster [INF] Cluster is now healthy 2026-03-09T14:17:53.763 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.800910+0000 mon.a (mon.0) 268 : cluster [INF] Cluster is now healthy 2026-03-09T14:17:53.763 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.803259+0000 mon.a (mon.0) 269 : cluster [INF] overall HEALTH_OK 2026-03-09T14:17:53.763 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:53 vm04 bash[19581]: cluster 2026-03-09T14:17:52.803259+0000 mon.a (mon.0) 269 : cluster [INF] overall HEALTH_OK 2026-03-09T14:17:53.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:47.778690+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T14:17:53.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:47.778690+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T14:17:53.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.794019+0000 mon.c (mon.1) 3 : cluster [INF] mon.c calling monitor election 2026-03-09T14:17:53.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.794019+0000 mon.c (mon.1) 3 : cluster [INF] mon.c calling monitor election 2026-03-09T14:17:53.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.794039+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election 2026-03-09T14:17:53.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.794039+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election 2026-03-09T14:17:53.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.795546+0000 mon.a (mon.0) 253 : cluster [INF] mon.a calling monitor election 2026-03-09T14:17:53.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.795546+0000 mon.a (mon.0) 253 : cluster [INF] mon.a calling monitor election 2026-03-09T14:17:53.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 
bash[20070]: cluster 2026-03-09T14:17:52.797517+0000 mon.a (mon.0) 254 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T14:17:53.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.797517+0000 mon.a (mon.0) 254 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T14:17:53.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.800399+0000 mon.a (mon.0) 255 : cluster [DBG] monmap epoch 3 2026-03-09T14:17:53.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.800399+0000 mon.a (mon.0) 255 : cluster [DBG] monmap epoch 3 2026-03-09T14:17:53.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.800416+0000 mon.a (mon.0) 256 : cluster [DBG] fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 2026-03-09T14:17:53.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.800416+0000 mon.a (mon.0) 256 : cluster [DBG] fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 2026-03-09T14:17:53.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.800425+0000 mon.a (mon.0) 257 : cluster [DBG] last_changed 2026-03-09T14:17:45.777986+0000 2026-03-09T14:17:53.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.800425+0000 mon.a (mon.0) 257 : cluster [DBG] last_changed 2026-03-09T14:17:45.777986+0000 2026-03-09T14:17:53.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.800434+0000 mon.a (mon.0) 258 : cluster [DBG] created 2026-03-09T14:16:36.936076+0000 2026-03-09T14:17:53.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.800434+0000 mon.a (mon.0) 258 : cluster [DBG] created 2026-03-09T14:16:36.936076+0000 2026-03-09T14:17:53.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.800442+0000 mon.a (mon.0) 259 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T14:17:53.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.800442+0000 mon.a (mon.0) 259 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T14:17:53.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.800450+0000 mon.a (mon.0) 260 : cluster [DBG] election_strategy: 1 2026-03-09T14:17:53.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.800450+0000 mon.a (mon.0) 260 : cluster [DBG] election_strategy: 1 2026-03-09T14:17:53.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.800459+0000 mon.a (mon.0) 261 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T14:17:53.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.800459+0000 mon.a (mon.0) 261 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T14:17:53.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.800467+0000 mon.a (mon.0) 262 : cluster [DBG] 1: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.c 2026-03-09T14:17:53.764 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.800467+0000 mon.a (mon.0) 262 : cluster [DBG] 1: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.c 2026-03-09T14:17:53.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.800475+0000 mon.a (mon.0) 263 : cluster [DBG] 2: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b 2026-03-09T14:17:53.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.800475+0000 mon.a (mon.0) 263 : cluster [DBG] 2: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b 2026-03-09T14:17:53.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.800702+0000 mon.a (mon.0) 264 : cluster [DBG] fsmap 2026-03-09T14:17:53.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.800702+0000 mon.a (mon.0) 264 : cluster [DBG] fsmap 2026-03-09T14:17:53.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.800720+0000 mon.a (mon.0) 265 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T14:17:53.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.800720+0000 mon.a (mon.0) 265 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T14:17:53.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.800824+0000 mon.a (mon.0) 266 : cluster [DBG] mgrmap e12: x(active, since 53s) 2026-03-09T14:17:53.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.800824+0000 mon.a (mon.0) 266 : cluster [DBG] mgrmap e12: x(active, since 53s) 2026-03-09T14:17:53.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.800899+0000 mon.a (mon.0) 267 : cluster [INF] Health check cleared: MON_DOWN (was: 1/3 mons down, quorum a,c) 2026-03-09T14:17:53.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.800899+0000 mon.a (mon.0) 267 : cluster [INF] Health check cleared: MON_DOWN (was: 1/3 mons down, quorum a,c) 2026-03-09T14:17:53.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.800910+0000 mon.a (mon.0) 268 : cluster [INF] Cluster is now healthy 2026-03-09T14:17:53.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.800910+0000 mon.a (mon.0) 268 : cluster [INF] Cluster is now healthy 2026-03-09T14:17:53.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.803259+0000 mon.a (mon.0) 269 : cluster [INF] overall HEALTH_OK 2026-03-09T14:17:53.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:53 vm05 bash[20070]: cluster 2026-03-09T14:17:52.803259+0000 mon.a (mon.0) 269 : cluster [INF] overall HEALTH_OK 2026-03-09T14:17:53.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:53 vm03 bash[17524]: cluster 2026-03-09T14:17:47.778690+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T14:17:53.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:53 vm03 bash[17524]: cluster 2026-03-09T14:17:47.778690+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T14:17:53.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 
2026-03-09T14:17:53.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:53 vm03 bash[17524]: cluster 2026-03-09T14:17:52.794019+0000 mon.c (mon.1) 3 : cluster [INF] mon.c calling monitor election
2026-03-09T14:17:53.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:53 vm03 bash[17524]: cluster 2026-03-09T14:17:52.794039+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election
2026-03-09T14:17:53.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:53 vm03 bash[17524]: cluster 2026-03-09T14:17:52.795546+0000 mon.a (mon.0) 253 : cluster [INF] mon.a calling monitor election
2026-03-09T14:17:53.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:53 vm03 bash[17524]: cluster 2026-03-09T14:17:52.797517+0000 mon.a (mon.0) 254 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2)
2026-03-09T14:17:53.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:53 vm03 bash[17524]: cluster 2026-03-09T14:17:52.800399+0000 mon.a (mon.0) 255 : cluster [DBG] monmap epoch 3
2026-03-09T14:17:53.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:53 vm03 bash[17524]: cluster 2026-03-09T14:17:52.800416+0000 mon.a (mon.0) 256 : cluster [DBG] fsid 3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:17:53.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:53 vm03 bash[17524]: cluster 2026-03-09T14:17:52.800425+0000 mon.a (mon.0) 257 : cluster [DBG] last_changed 2026-03-09T14:17:45.777986+0000
2026-03-09T14:17:53.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:53 vm03 bash[17524]: cluster 2026-03-09T14:17:52.800434+0000 mon.a (mon.0) 258 : cluster [DBG] created 2026-03-09T14:16:36.936076+0000
2026-03-09T14:17:53.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:53 vm03 bash[17524]: cluster 2026-03-09T14:17:52.800442+0000 mon.a (mon.0) 259 : cluster [DBG] min_mon_release 19 (squid)
2026-03-09T14:17:53.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:53 vm03 bash[17524]: cluster 2026-03-09T14:17:52.800450+0000 mon.a (mon.0) 260 : cluster [DBG] election_strategy: 1
2026-03-09T14:17:53.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:53 vm03 bash[17524]: cluster 2026-03-09T14:17:52.800459+0000 mon.a (mon.0) 261 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a
2026-03-09T14:17:53.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:53 vm03 bash[17524]: cluster 2026-03-09T14:17:52.800467+0000 mon.a (mon.0) 262 : cluster [DBG] 1: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.c
2026-03-09T14:17:53.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:53 vm03 bash[17524]: cluster 2026-03-09T14:17:52.800475+0000 mon.a (mon.0) 263 : cluster [DBG] 2: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b
2026-03-09T14:17:53.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:53 vm03 bash[17524]: cluster 2026-03-09T14:17:52.800702+0000 mon.a (mon.0) 264 : cluster [DBG] fsmap
2026-03-09T14:17:53.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:53 vm03 bash[17524]: cluster 2026-03-09T14:17:52.800720+0000 mon.a (mon.0) 265 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-09T14:17:53.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:53 vm03 bash[17524]: cluster 2026-03-09T14:17:52.800824+0000 mgr.x (mgr.14150) 64 : cluster [DBG] mgrmap e12: x(active, since 53s)
2026-03-09T14:17:53.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:53 vm03 bash[17524]: cluster 2026-03-09T14:17:52.800899+0000 mon.a (mon.0) 267 : cluster [INF] Health check cleared: MON_DOWN (was: 1/3 mons down, quorum a,c)
2026-03-09T14:17:53.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:53 vm03 bash[17524]: cluster 2026-03-09T14:17:52.800910+0000 mon.a (mon.0) 268 : cluster [INF] Cluster is now healthy
2026-03-09T14:17:53.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:53 vm03 bash[17524]: cluster 2026-03-09T14:17:52.803259+0000 mon.a (mon.0) 269 : cluster [INF] overall HEALTH_OK
2026-03-09T14:17:54.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:54 vm04 bash[19581]: cluster 2026-03-09T14:17:52.911013+0000 mgr.x (mgr.14150) 64 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:54.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:54 vm04 bash[19581]: audit 2026-03-09T14:17:53.777037+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:17:54.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:54 vm05 bash[20070]: cluster 2026-03-09T14:17:52.911013+0000 mgr.x (mgr.14150) 64 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:54.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:54 vm05 bash[20070]: audit 2026-03-09T14:17:53.777037+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:17:54.780 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:54 vm03 bash[17524]: cluster 2026-03-09T14:17:52.911013+0000 mgr.x (mgr.14150) 64 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:54.780 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:54 vm03 bash[17524]: audit 2026-03-09T14:17:53.777037+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
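The election above ends with all three monitors back in quorum and the MON_DOWN health check cleared. A minimal sketch of confirming quorum and health by hand on a target host, assuming the same cephadm wrapper, config paths, and fsid this run uses (all taken from the log above):

    #!/usr/bin/env bash
    # Confirm the monitors are in quorum and the cluster is healthy,
    # using the same cephadm shell wrapper the task runs on the targets.
    set -ex
    FSID=3346de4a-1bc2-11f1-95ae-3796c8433614   # fsid from the monmap dump above
    sudo /home/ubuntu/cephtest/cephadm shell \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid "$FSID" -- ceph quorum_status --format json-pretty
    sudo /home/ubuntu/cephtest/cephadm shell \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid "$FSID" -- ceph health detail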
2026-03-09T14:17:55.057 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:17:54 vm03 bash[17796]: debug 2026-03-09T14:17:54.775+0000 7fbc035c9640 -1 mgr.server handle_report got status from non-daemon mon.b
2026-03-09T14:17:55.890 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config
2026-03-09T14:17:56.124 INFO:teuthology.orchestra.run.vm03.stdout:# minimal ceph.conf for 3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:17:56.124 INFO:teuthology.orchestra.run.vm03.stdout:[global]
2026-03-09T14:17:56.124 INFO:teuthology.orchestra.run.vm03.stdout: fsid = 3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:17:56.124 INFO:teuthology.orchestra.run.vm03.stdout: mon_host = [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0]
2026-03-09T14:17:56.174 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring...
2026-03-09T14:17:56.174 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-09T14:17:56.174 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.conf
2026-03-09T14:17:56.181 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-09T14:17:56.181 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-09T14:17:56.230 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-09T14:17:56.230 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/ceph/ceph.conf
2026-03-09T14:17:56.238 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-09T14:17:56.238 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-09T14:17:56.284 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-09T14:17:56.284 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/ceph/ceph.conf
2026-03-09T14:17:56.290 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-09T14:17:56.290 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-09T14:17:56.344 INFO:tasks.cephadm:Adding mgr.x on vm03
2026-03-09T14:17:56.344 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph orch apply mgr '1;vm03=x'
2026-03-09T14:17:56.504 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:56 vm03 bash[17524]: cluster 2026-03-09T14:17:54.911174+0000 mgr.x (mgr.14150) 65 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:56.505 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:56 vm03 bash[17524]: audit 2026-03-09T14:17:56.118676+0000 mon.c (mon.1) 4 : audit [DBG] from='client.? 192.168.123.103:0/1293865356' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
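The `# minimal ceph.conf` block above is the output of `ceph config generate-minimal-conf`, which the harness then writes to every host with `dd of=...` (stdin carries the file contents). A sketch of the same fan-out done by hand, assuming admin access on vm03 and passwordless SSH to the other targets (hostnames are the ones from this run):

    #!/usr/bin/env bash
    # Regenerate the minimal client config and push it, plus the admin keyring,
    # to the remaining hosts the way tasks.cephadm distributes them.
    set -ex
    sudo ceph config generate-minimal-conf | sudo tee /etc/ceph/ceph.conf
    for host in vm04.local vm05.local; do
        # 'dd of=FILE' writes its stdin to FILE, matching the commands in the log
        sudo cat /etc/ceph/ceph.conf | ssh "$host" sudo dd of=/etc/ceph/ceph.conf
        sudo cat /etc/ceph/ceph.client.admin.keyring | ssh "$host" sudo dd of=/etc/ceph/ceph.client.admin.keyring
    done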
2026-03-09T14:17:56.761 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:56 vm04 bash[19581]: cluster 2026-03-09T14:17:54.911174+0000 mgr.x (mgr.14150) 65 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:56.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:56 vm04 bash[19581]: audit 2026-03-09T14:17:56.118676+0000 mon.c (mon.1) 4 : audit [DBG] from='client.? 192.168.123.103:0/1293865356' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:17:56.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:56 vm05 bash[20070]: cluster 2026-03-09T14:17:54.911174+0000 mgr.x (mgr.14150) 65 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:56.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:56 vm05 bash[20070]: audit 2026-03-09T14:17:56.118676+0000 mon.c (mon.1) 4 : audit [DBG] from='client.? 192.168.123.103:0/1293865356' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:17:58.761 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:17:58 vm04 bash[19581]: cluster 2026-03-09T14:17:56.911342+0000 mgr.x (mgr.14150) 66 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:58.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:17:58 vm05 bash[20070]: cluster 2026-03-09T14:17:56.911342+0000 mgr.x (mgr.14150) 66 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:58.806 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:17:58 vm03 bash[17524]: cluster 2026-03-09T14:17:56.911342+0000 mgr.x (mgr.14150) 66 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:17:59.998 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.c/config
2026-03-09T14:18:00.269 INFO:teuthology.orchestra.run.vm05.stdout:Scheduled mgr update...
2026-03-09T14:18:00.332 INFO:tasks.cephadm:Deploying OSDs...
2026-03-09T14:18:00.332 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-09T14:18:00.332 DEBUG:teuthology.orchestra.run.vm03:> dd if=/scratch_devs of=/dev/stdout
2026-03-09T14:18:00.335 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T14:18:00.336 DEBUG:teuthology.orchestra.run.vm03:> ls /dev/[sv]d?
2026-03-09T14:18:00.379 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vda
2026-03-09T14:18:00.379 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vdb
2026-03-09T14:18:00.379 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vdc
2026-03-09T14:18:00.379 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vdd
2026-03-09T14:18:00.379 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vde
2026-03-09T14:18:00.379 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-09T14:18:00.379 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-09T14:18:00.379 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vdb
2026-03-09T14:18:00.423 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vdb
2026-03-09T14:18:00.423 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-09T14:18:00.423 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10
2026-03-09T14:18:00.423 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T14:18:00.423 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-09 14:10:33.791249567 +0000
2026-03-09T14:18:00.423 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-09 14:10:32.871249567 +0000
2026-03-09T14:18:00.423 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-09 14:10:32.871249567 +0000
2026-03-09T14:18:00.423 INFO:teuthology.orchestra.run.vm03.stdout: Birth: -
2026-03-09T14:18:00.424 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-09T14:18:00.471 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in
2026-03-09T14:18:00.471 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out
2026-03-09T14:18:00.471 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.00011746 s, 4.4 MB/s
2026-03-09T14:18:00.472 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-09T14:18:00.520 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vdc
2026-03-09T14:18:00.568 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vdc
2026-03-09T14:18:00.568 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-09T14:18:00.568 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20
2026-03-09T14:18:00.568 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T14:18:00.568 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-09 14:10:33.799249567 +0000
2026-03-09T14:18:00.568 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-09 14:10:32.859249567 +0000
2026-03-09T14:18:00.568 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-09 14:10:32.859249567 +0000
2026-03-09T14:18:00.568 INFO:teuthology.orchestra.run.vm03.stdout: Birth: -
2026-03-09T14:18:00.568 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-09T14:18:00.618 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in
2026-03-09T14:18:00.619 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out
2026-03-09T14:18:00.619 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000176942 s, 2.9 MB/s
2026-03-09T14:18:00.619 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-09T14:18:00.665 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vdd
2026-03-09T14:18:00.711 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vdd
2026-03-09T14:18:00.711 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-09T14:18:00.711 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30
2026-03-09T14:18:00.711 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T14:18:00.711 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-09 14:10:33.791249567 +0000
2026-03-09T14:18:00.711 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-09 14:10:32.823249567 +0000
2026-03-09T14:18:00.711 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-09 14:10:32.823249567 +0000
2026-03-09T14:18:00.711 INFO:teuthology.orchestra.run.vm03.stdout: Birth: -
2026-03-09T14:18:00.711 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-09T14:18:00.758 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:00 vm03 bash[17524]: cluster 2026-03-09T14:17:58.911505+0000 mgr.x (mgr.14150) 67 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:00.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:00 vm03 bash[17524]: audit 2026-03-09T14:18:00.265756+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:00.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:00 vm03 bash[17524]: audit 2026-03-09T14:18:00.266295+0000 mon.a (mon.0) 272 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:18:00.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:00 vm03 bash[17524]: audit 2026-03-09T14:18:00.267102+0000 mon.a (mon.0) 273 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:18:00.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:00 vm03 bash[17524]: audit 2026-03-09T14:18:00.267494+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:18:00.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:00 vm03 bash[17524]: audit 2026-03-09T14:18:00.270540+0000 mon.a (mon.0) 275 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:00.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:00 vm03 bash[17524]: audit 2026-03-09T14:18:00.273476+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:00.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:00 vm03 bash[17524]: audit 2026-03-09T14:18:00.289410+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T14:18:00.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:00 vm03 bash[17524]: audit 2026-03-09T14:18:00.289948+0000 mon.a (mon.0) 278 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-09T14:18:00.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:00 vm03 bash[17524]: audit 2026-03-09T14:18:00.290451+0000 mon.a (mon.0) 279 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:18:00.759 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in
2026-03-09T14:18:00.759 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out
2026-03-09T14:18:00.759 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000179745 s, 2.8 MB/s
2026-03-09T14:18:00.760 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-09T14:18:00.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:00 vm05 bash[20070]: cluster 2026-03-09T14:17:58.911505+0000 mgr.x (mgr.14150) 67 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:00.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:00 vm05 bash[20070]: audit 2026-03-09T14:18:00.265756+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:00.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:00 vm05 bash[20070]: audit 2026-03-09T14:18:00.266295+0000 mon.a (mon.0) 272 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:18:00.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:00 vm05 bash[20070]: audit 2026-03-09T14:18:00.267102+0000 mon.a (mon.0) 273 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:18:00.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:00 vm05 bash[20070]: audit 2026-03-09T14:18:00.267494+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:18:00.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:00 vm05 bash[20070]: audit 2026-03-09T14:18:00.270540+0000 mon.a (mon.0) 275 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:00.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:00 vm05 bash[20070]: audit 2026-03-09T14:18:00.273476+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:00.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:00 vm05 bash[20070]: audit 2026-03-09T14:18:00.289410+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T14:18:00.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:00 vm05 bash[20070]: audit 2026-03-09T14:18:00.289948+0000 mon.a (mon.0) 278 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-09T14:18:00.764 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:00 vm05 bash[20070]: audit 2026-03-09T14:18:00.290451+0000 mon.a (mon.0) 279 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:18:00.804 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vde
2026-03-09T14:18:00.851 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vde
2026-03-09T14:18:00.851 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-09T14:18:00.851 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40
2026-03-09T14:18:00.851 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T14:18:00.851 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-09 14:10:33.799249567 +0000
2026-03-09T14:18:00.851 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-09 14:10:32.823249567 +0000
2026-03-09T14:18:00.851 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-09 14:10:32.823249567 +0000
2026-03-09T14:18:00.851 INFO:teuthology.orchestra.run.vm03.stdout: Birth: -
2026-03-09T14:18:00.851 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-09T14:18:00.898 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in
2026-03-09T14:18:00.898 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out
2026-03-09T14:18:00.899 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000183623 s, 2.8 MB/s
2026-03-09T14:18:00.899 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vde
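The run above is teuthology's scratch-device probe on vm03: read a /scratch_devs manifest if present (absent here, hence the exit status 1), otherwise list /dev/[sv]d?, drop the root device, then require each candidate to stat as a block special file, be readable, and not be mounted. A condensed sketch of the same checks, under the assumption that /dev/vda is the root disk as in this run:

    #!/usr/bin/env bash
    # Reproduce the scratch-device probe seen above: enumerate virtio/SCSI disks,
    # skip the root device, and keep only readable, unmounted block devices.
    set -e
    devs=$(cat /scratch_devs 2>/dev/null || ls /dev/[sv]d?)
    for dev in $devs; do
        [ "$dev" = /dev/vda ] && continue             # root device, as in the log
        stat "$dev" >/dev/null                        # must exist as a block special file
        sudo dd if="$dev" of=/dev/null count=1        # must be readable (one 512-byte sector)
        ! mount | grep -v devtmpfs | grep -q "$dev"   # must not be mounted anywhere
        echo "usable scratch device: $dev"
    done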
2026-03-09T14:18:00.944 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-09T14:18:00.944 DEBUG:teuthology.orchestra.run.vm04:> dd if=/scratch_devs of=/dev/stdout
2026-03-09T14:18:00.947 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T14:18:00.947 DEBUG:teuthology.orchestra.run.vm04:> ls /dev/[sv]d?
2026-03-09T14:18:00.992 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vda
2026-03-09T14:18:00.992 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vdb
2026-03-09T14:18:00.992 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vdc
2026-03-09T14:18:00.992 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vdd
2026-03-09T14:18:00.992 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vde
2026-03-09T14:18:00.992 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-09T14:18:00.992 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-09T14:18:00.992 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vdb
2026-03-09T14:18:01.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:00 vm04 bash[19581]: cluster 2026-03-09T14:17:58.911505+0000 mgr.x (mgr.14150) 67 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:01.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:00 vm04 bash[19581]: audit 2026-03-09T14:18:00.265756+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:01.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:00 vm04 bash[19581]: audit 2026-03-09T14:18:00.266295+0000 mon.a (mon.0) 272 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:18:01.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:00 vm04 bash[19581]: audit 2026-03-09T14:18:00.267102+0000 mon.a (mon.0) 273 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:18:01.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:00 vm04 bash[19581]: audit 2026-03-09T14:18:00.267494+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:18:01.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:00 vm04 bash[19581]: audit 2026-03-09T14:18:00.270540+0000 mon.a (mon.0) 275 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:01.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:00 vm04 bash[19581]: audit 2026-03-09T14:18:00.273476+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:01.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:00 vm04 bash[19581]: audit 2026-03-09T14:18:00.289410+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T14:18:01.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:00 vm04 bash[19581]: audit 2026-03-09T14:18:00.289948+0000 mon.a (mon.0) 278 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-09T14:18:01.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:00 vm04 bash[19581]: audit 2026-03-09T14:18:00.290451+0000 mon.a (mon.0) 279 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:18:01.014 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vdb
2026-03-09T14:18:01.014 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-09T14:18:01.014 INFO:teuthology.orchestra.run.vm04.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10
2026-03-09T14:18:01.014 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T14:18:01.014 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-09 14:09:37.558242417 +0000
2026-03-09T14:18:01.014 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-09 14:09:36.606242417 +0000
2026-03-09T14:18:01.014 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-09 14:09:36.606242417 +0000
2026-03-09T14:18:01.014 INFO:teuthology.orchestra.run.vm04.stdout: Birth: -
2026-03-09T14:18:01.014 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-09T14:18:01.064 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in
2026-03-09T14:18:01.064 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out
2026-03-09T14:18:01.064 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.000165581 s, 3.1 MB/s
2026-03-09T14:18:01.065 DEBUG:teuthology.orchestra.run.vm04:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-09T14:18:01.109 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vdc
2026-03-09T14:18:01.153 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vdc
2026-03-09T14:18:01.153 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-09T14:18:01.153 INFO:teuthology.orchestra.run.vm04.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20
2026-03-09T14:18:01.153 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T14:18:01.153 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-09 14:09:37.566242417 +0000
2026-03-09T14:18:01.153 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-09 14:09:36.610242417 +0000
2026-03-09T14:18:01.153 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-09 14:09:36.610242417 +0000
2026-03-09T14:18:01.153 INFO:teuthology.orchestra.run.vm04.stdout: Birth: -
2026-03-09T14:18:01.153 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-09T14:18:01.200 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in
2026-03-09T14:18:01.201 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out
2026-03-09T14:18:01.201 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.000146585 s, 3.5 MB/s
2026-03-09T14:18:01.201 DEBUG:teuthology.orchestra.run.vm04:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-09T14:18:01.246 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vdd
2026-03-09T14:18:01.293 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vdd
2026-03-09T14:18:01.293 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-09T14:18:01.293 INFO:teuthology.orchestra.run.vm04.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30
2026-03-09T14:18:01.293 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T14:18:01.293 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-09 14:09:37.558242417 +0000
2026-03-09T14:18:01.293 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-09 14:09:36.606242417 +0000
2026-03-09T14:18:01.293 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-09 14:09:36.606242417 +0000
2026-03-09T14:18:01.293 INFO:teuthology.orchestra.run.vm04.stdout: Birth: -
2026-03-09T14:18:01.293 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-09T14:18:01.341 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in
2026-03-09T14:18:01.341 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out
2026-03-09T14:18:01.341 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.000195145 s, 2.6 MB/s
2026-03-09T14:18:01.342 DEBUG:teuthology.orchestra.run.vm04:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-09T14:18:01.386 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vde
2026-03-09T14:18:01.433 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vde
2026-03-09T14:18:01.433 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-09T14:18:01.433 INFO:teuthology.orchestra.run.vm04.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40
2026-03-09T14:18:01.433 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T14:18:01.433 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-09 14:09:37.566242417 +0000
2026-03-09T14:18:01.433 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-09 14:09:36.606242417 +0000
2026-03-09T14:18:01.433 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-09 14:09:36.606242417 +0000
2026-03-09T14:18:01.433 INFO:teuthology.orchestra.run.vm04.stdout: Birth: -
2026-03-09T14:18:01.433 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-09T14:18:01.482 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in
2026-03-09T14:18:01.482 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out
2026-03-09T14:18:01.482 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.000200585 s, 2.6 MB/s
2026-03-09T14:18:01.483 DEBUG:teuthology.orchestra.run.vm04:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-09T14:18:01.534 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-09T14:18:01.534 DEBUG:teuthology.orchestra.run.vm05:> dd if=/scratch_devs of=/dev/stdout
2026-03-09T14:18:01.537 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T14:18:01.537 DEBUG:teuthology.orchestra.run.vm05:> ls /dev/[sv]d?
2026-03-09T14:18:01.581 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vda
2026-03-09T14:18:01.581 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vdb
2026-03-09T14:18:01.581 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vdc
2026-03-09T14:18:01.581 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vdd
2026-03-09T14:18:01.581 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vde
2026-03-09T14:18:01.581 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-09T14:18:01.581 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-09T14:18:01.581 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vdb
2026-03-09T14:18:01.625 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vdb
2026-03-09T14:18:01.625 INFO:teuthology.orchestra.run.vm05.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-09T14:18:01.625 INFO:teuthology.orchestra.run.vm05.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10
2026-03-09T14:18:01.625 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T14:18:01.625 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-09 14:10:02.607922895 +0000
2026-03-09T14:18:01.625 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-09 14:10:01.687922895 +0000
2026-03-09T14:18:01.625 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-09 14:10:01.687922895 +0000
2026-03-09T14:18:01.625 INFO:teuthology.orchestra.run.vm05.stdout: Birth: -
2026-03-09T14:18:01.625 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-09T14:18:01.673 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in
2026-03-09T14:18:01.673 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out
2026-03-09T14:18:01.673 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000150613 s, 3.4 MB/s
2026-03-09T14:18:01.673 DEBUG:teuthology.orchestra.run.vm05:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-09T14:18:01.718 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vdc
2026-03-09T14:18:01.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:00 vm04 bash[19581]: audit 2026-03-09T14:18:00.261422+0000 mgr.x (mgr.14150) 68 : audit [DBG] from='client.24109 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "1;vm03=x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:18:01.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:01 vm04 bash[19581]: cephadm 2026-03-09T14:18:00.262295+0000 mgr.x (mgr.14150) 69 : cephadm [INF] Saving service mgr spec with placement vm03=x;count:1
2026-03-09T14:18:01.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:01 vm04 bash[19581]: cephadm 2026-03-09T14:18:00.289191+0000 mgr.x (mgr.14150) 70 : cephadm [INF] Reconfiguring mgr.x (unknown last config time)...
2026-03-09T14:18:01.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:01 vm04 bash[19581]: cephadm 2026-03-09T14:18:00.291064+0000 mgr.x (mgr.14150) 71 : cephadm [INF] Reconfiguring daemon mgr.x on vm03
2026-03-09T14:18:01.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:01 vm04 bash[19581]: audit 2026-03-09T14:18:00.678783+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:01.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:01 vm04 bash[19581]: audit 2026-03-09T14:18:00.682892+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:01.765 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vdc
2026-03-09T14:18:01.765 INFO:teuthology.orchestra.run.vm05.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-09T14:18:01.765 INFO:teuthology.orchestra.run.vm05.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20
2026-03-09T14:18:01.765 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T14:18:01.765 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-09 14:10:02.615922895 +0000
2026-03-09T14:18:01.765 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-09 14:10:01.687922895 +0000
2026-03-09T14:18:01.765 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-09 14:10:01.687922895 +0000
2026-03-09T14:18:01.765 INFO:teuthology.orchestra.run.vm05.stdout: Birth: -
2026-03-09T14:18:01.765 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-09T14:18:01.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:01 vm03 bash[17524]: audit 2026-03-09T14:18:00.261422+0000 mgr.x (mgr.14150) 68 : audit [DBG] from='client.24109 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "1;vm03=x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:18:01.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:01 vm03 bash[17524]: cephadm 2026-03-09T14:18:00.262295+0000 mgr.x (mgr.14150) 69 : cephadm [INF] Saving service mgr spec with placement vm03=x;count:1
2026-03-09T14:18:01.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:01 vm03 bash[17524]: cephadm 2026-03-09T14:18:00.289191+0000 mgr.x (mgr.14150) 70 : cephadm [INF] Reconfiguring mgr.x (unknown last config time)...
2026-03-09T14:18:01.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:01 vm03 bash[17524]: cephadm 2026-03-09T14:18:00.291064+0000 mgr.x (mgr.14150) 71 : cephadm [INF] Reconfiguring daemon mgr.x on vm03
2026-03-09T14:18:01.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:01 vm03 bash[17524]: audit 2026-03-09T14:18:00.678783+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:01.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:01 vm03 bash[17524]: audit 2026-03-09T14:18:00.682892+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:01.812 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:01 vm05 bash[20070]: audit 2026-03-09T14:18:00.261422+0000 mgr.x (mgr.14150) 68 : audit [DBG] from='client.24109 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "1;vm03=x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:18:01.812 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:01 vm05 bash[20070]: cephadm 2026-03-09T14:18:00.262295+0000 mgr.x (mgr.14150) 69 : cephadm [INF] Saving service mgr spec with placement vm03=x;count:1
2026-03-09T14:18:01.812 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:01 vm05 bash[20070]: cephadm 2026-03-09T14:18:00.289191+0000 mgr.x (mgr.14150) 70 : cephadm [INF] Reconfiguring mgr.x (unknown last config time)...
2026-03-09T14:18:01.812 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:01 vm05 bash[20070]: cephadm 2026-03-09T14:18:00.291064+0000 mgr.x (mgr.14150) 71 : cephadm [INF] Reconfiguring daemon mgr.x on vm03
2026-03-09T14:18:01.812 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:01 vm05 bash[20070]: audit 2026-03-09T14:18:00.678783+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:01.812 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:01 vm05 bash[20070]: audit 2026-03-09T14:18:00.682892+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:01.813 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in
2026-03-09T14:18:01.813 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out
2026-03-09T14:18:01.813 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000137798 s, 3.7 MB/s
2026-03-09T14:18:01.813 DEBUG:teuthology.orchestra.run.vm05:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-09T14:18:01.859 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vdd
2026-03-09T14:18:01.905 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vdd
2026-03-09T14:18:01.905 INFO:teuthology.orchestra.run.vm05.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-09T14:18:01.905 INFO:teuthology.orchestra.run.vm05.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30
2026-03-09T14:18:01.905 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T14:18:01.905 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-09 14:10:02.603922895 +0000
2026-03-09T14:18:01.905 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-09 14:10:01.687922895 +0000
2026-03-09T14:18:01.905 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-09 14:10:01.687922895 +0000
2026-03-09T14:18:01.905 INFO:teuthology.orchestra.run.vm05.stdout: Birth: -
2026-03-09T14:18:01.905 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-09T14:18:01.952 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in
2026-03-09T14:18:01.952 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out
2026-03-09T14:18:01.952 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000156412 s, 3.3 MB/s
2026-03-09T14:18:01.953 DEBUG:teuthology.orchestra.run.vm05:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
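Interleaved with the vm05 probe, the cephadm mgr module acknowledges the earlier `ceph orch apply mgr '1;vm03=x'`: it saves a mgr service spec with placement `vm03=x;count:1` and reconfigures the running daemon. The placement argument is cephadm's `count;host=daemon-name` shorthand; as a sketch, an explicit service-spec form that should be equivalent (the YAML rendering is an assumption based on the shorthand above):

    # One-liner, as run by the task:
    ceph orch apply mgr '1;vm03=x'

    # Assumed-equivalent explicit spec, applied from stdin:
    cat <<'EOF' | ceph orch apply -i -
    service_type: mgr
    placement:
      count: 1
      hosts:
      - vm03=x
    EOF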
mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-09T14:18:01.998 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vde 2026-03-09T14:18:02.041 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vde 2026-03-09T14:18:02.041 INFO:teuthology.orchestra.run.vm05.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T14:18:02.041 INFO:teuthology.orchestra.run.vm05.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40 2026-03-09T14:18:02.041 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T14:18:02.041 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-09 14:10:02.611922895 +0000 2026-03-09T14:18:02.041 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-09 14:10:01.687922895 +0000 2026-03-09T14:18:02.041 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-09 14:10:01.687922895 +0000 2026-03-09T14:18:02.041 INFO:teuthology.orchestra.run.vm05.stdout: Birth: - 2026-03-09T14:18:02.041 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-09T14:18:02.088 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in 2026-03-09T14:18:02.088 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out 2026-03-09T14:18:02.088 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000139861 s, 3.7 MB/s 2026-03-09T14:18:02.089 DEBUG:teuthology.orchestra.run.vm05:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-09T14:18:02.134 INFO:tasks.cephadm:Deploying osd.0 on vm03 with /dev/vde... 2026-03-09T14:18:02.134 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- lvm zap /dev/vde 2026-03-09T14:18:02.519 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:02 vm03 bash[17524]: cluster 2026-03-09T14:18:00.911696+0000 mgr.x (mgr.14150) 72 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:18:02.519 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:02 vm03 bash[17524]: cluster 2026-03-09T14:18:00.911696+0000 mgr.x (mgr.14150) 72 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:18:03.011 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:02 vm04 bash[19581]: cluster 2026-03-09T14:18:00.911696+0000 mgr.x (mgr.14150) 72 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:18:03.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:02 vm04 bash[19581]: cluster 2026-03-09T14:18:00.911696+0000 mgr.x (mgr.14150) 72 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:18:03.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:02 vm05 bash[20070]: cluster 2026-03-09T14:18:00.911696+0000 mgr.x (mgr.14150) 72 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:18:03.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:02 vm05 bash[20070]: cluster 2026-03-09T14:18:00.911696+0000 mgr.x (mgr.14150) 72 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:18:04.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:04 vm03 bash[17524]: cluster 2026-03-09T14:18:02.911893+0000 mgr.x (mgr.14150) 73 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:18:04.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:04 vm03 bash[17524]: 
2026-03-09T14:18:05.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:04 vm04 bash[19581]: cluster 2026-03-09T14:18:02.911893+0000 mgr.x (mgr.14150) 73 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:05.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:04 vm05 bash[20070]: cluster 2026-03-09T14:18:02.911893+0000 mgr.x (mgr.14150) 73 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:05.806 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:05 vm03 bash[17524]: cluster 2026-03-09T14:18:04.912071+0000 mgr.x (mgr.14150) 74 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:06.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:05 vm04 bash[19581]: cluster 2026-03-09T14:18:04.912071+0000 mgr.x (mgr.14150) 74 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:06.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:05 vm05 bash[20070]: cluster 2026-03-09T14:18:04.912071+0000 mgr.x (mgr.14150) 74 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:06.740 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config
2026-03-09T14:18:07.581 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:18:07.594 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph orch daemon add osd vm03:/dev/vde
2026-03-09T14:18:07.972 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:07 vm03 bash[17524]: cluster 2026-03-09T14:18:06.912278+0000 mgr.x (mgr.14150) 75 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:08.261 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:07 vm04 bash[19581]: cluster 2026-03-09T14:18:06.912278+0000 mgr.x (mgr.14150) 75 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:08.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:07 vm05 bash[20070]: cluster 2026-03-09T14:18:06.912278+0000 mgr.x (mgr.14150) 75 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:10.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:10 vm03 bash[17524]: cluster 2026-03-09T14:18:08.912478+0000 mgr.x (mgr.14150) 76 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:10.511 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:10 vm04 bash[19581]: cluster 2026-03-09T14:18:08.912478+0000 mgr.x (mgr.14150) 76 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:10.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:10 vm05 bash[20070]: cluster 2026-03-09T14:18:08.912478+0000 mgr.x (mgr.14150) 76 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:12.214 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config
2026-03-09T14:18:12.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:12 vm04 bash[19581]: cluster 2026-03-09T14:18:10.912673+0000 mgr.x (mgr.14150) 77 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:12.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:12 vm05 bash[20070]: cluster 2026-03-09T14:18:10.912673+0000 mgr.x (mgr.14150) 77 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:12.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:12 vm03 bash[17524]: cluster 2026-03-09T14:18:10.912673+0000 mgr.x (mgr.14150) 77 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:13.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:13 vm03 bash[17524]: audit 2026-03-09T14:18:12.465678+0000 mgr.x (mgr.14150) 78 : audit [DBG] from='client.24115 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:18:13.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:13 vm03 bash[17524]: audit 2026-03-09T14:18:12.466984+0000 mon.a (mon.0) 282 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T14:18:13.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:13 vm03 bash[17524]: audit 2026-03-09T14:18:12.468350+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T14:18:13.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:13 vm03 bash[17524]: audit 2026-03-09T14:18:12.468773+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:18:14.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:13 vm04 bash[19581]: audit 2026-03-09T14:18:12.465678+0000 mgr.x (mgr.14150) 78 : audit [DBG] from='client.24115 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:18:14.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:13 vm04 bash[19581]: audit 2026-03-09T14:18:12.466984+0000 mon.a (mon.0) 282 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:18:14.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:13 vm04 bash[19581]: audit 2026-03-09T14:18:12.466984+0000 mon.a (mon.0) 282 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:18:14.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:13 vm04 bash[19581]: audit 2026-03-09T14:18:12.468350+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:18:14.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:13 vm04 bash[19581]: audit 2026-03-09T14:18:12.468350+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:18:14.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:13 vm04 bash[19581]: audit 2026-03-09T14:18:12.468773+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:18:14.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:13 vm04 bash[19581]: audit 2026-03-09T14:18:12.468773+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:18:14.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:13 vm05 bash[20070]: audit 2026-03-09T14:18:12.465678+0000 mgr.x (mgr.14150) 78 : audit [DBG] from='client.24115 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:18:14.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:13 vm05 bash[20070]: audit 2026-03-09T14:18:12.465678+0000 mgr.x (mgr.14150) 78 : audit [DBG] from='client.24115 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:18:14.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:13 vm05 bash[20070]: audit 2026-03-09T14:18:12.466984+0000 mon.a (mon.0) 282 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:18:14.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:13 vm05 bash[20070]: audit 2026-03-09T14:18:12.466984+0000 mon.a (mon.0) 282 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:18:14.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:13 vm05 bash[20070]: audit 2026-03-09T14:18:12.468350+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:18:14.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:13 vm05 bash[20070]: audit 2026-03-09T14:18:12.468350+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:18:14.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:13 vm05 bash[20070]: audit 2026-03-09T14:18:12.468773+0000 mon.a (mon.0) 284 : audit 
2026-03-09T14:18:14.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:14 vm03 bash[17524]: cluster 2026-03-09T14:18:12.912842+0000 mgr.x (mgr.14150) 79 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:15.011 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:14 vm04 bash[19581]: cluster 2026-03-09T14:18:12.912842+0000 mgr.x (mgr.14150) 79 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:15.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:14 vm05 bash[20070]: cluster 2026-03-09T14:18:12.912842+0000 mgr.x (mgr.14150) 79 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:17.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:16 vm04 bash[19581]: cluster 2026-03-09T14:18:14.913062+0000 mgr.x (mgr.14150) 80 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:17.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:16 vm05 bash[20070]: cluster 2026-03-09T14:18:14.913062+0000 mgr.x (mgr.14150) 80 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:17.056 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:16 vm03 bash[17524]: cluster 2026-03-09T14:18:14.913062+0000 mgr.x (mgr.14150) 80 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:18.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:17 vm04 bash[19581]: cluster 2026-03-09T14:18:16.913278+0000 mgr.x (mgr.14150) 81 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:18.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:17 vm04 bash[19581]: audit 2026-03-09T14:18:17.290506+0000 mon.c (mon.1) 5 : audit [INF] from='client.? 192.168.123.103:0/2939656946' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6f17c91b-de65-4e8c-9e74-a512b4d9d1c9"}]: dispatch
2026-03-09T14:18:18.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:17 vm04 bash[19581]: audit 2026-03-09T14:18:17.292479+0000 mon.a (mon.0) 285 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6f17c91b-de65-4e8c-9e74-a512b4d9d1c9"}]: dispatch
2026-03-09T14:18:18.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:17 vm04 bash[19581]: audit 2026-03-09T14:18:17.295082+0000 mon.a (mon.0) 286 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6f17c91b-de65-4e8c-9e74-a512b4d9d1c9"}]': finished
2026-03-09T14:18:18.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:17 vm04 bash[19581]: cluster 2026-03-09T14:18:17.296980+0000 mon.a (mon.0) 287 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in
2026-03-09T14:18:18.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:17 vm04 bash[19581]: audit 2026-03-09T14:18:17.297068+0000 mon.a (mon.0) 288 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T14:18:18.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:17 vm05 bash[20070]: cluster 2026-03-09T14:18:16.913278+0000 mgr.x (mgr.14150) 81 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:18.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:17 vm05 bash[20070]: audit 2026-03-09T14:18:17.290506+0000 mon.c (mon.1) 5 : audit [INF] from='client.? 192.168.123.103:0/2939656946' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6f17c91b-de65-4e8c-9e74-a512b4d9d1c9"}]: dispatch
2026-03-09T14:18:18.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:17 vm05 bash[20070]: audit 2026-03-09T14:18:17.292479+0000 mon.a (mon.0) 285 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6f17c91b-de65-4e8c-9e74-a512b4d9d1c9"}]: dispatch
2026-03-09T14:18:18.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:17 vm05 bash[20070]: audit 2026-03-09T14:18:17.295082+0000 mon.a (mon.0) 286 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6f17c91b-de65-4e8c-9e74-a512b4d9d1c9"}]': finished
2026-03-09T14:18:18.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:17 vm05 bash[20070]: cluster 2026-03-09T14:18:17.296980+0000 mon.a (mon.0) 287 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in
2026-03-09T14:18:18.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:17 vm05 bash[20070]: audit 2026-03-09T14:18:17.297068+0000 mon.a (mon.0) 288 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T14:18:18.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:17 vm03 bash[17524]: cluster 2026-03-09T14:18:16.913278+0000 mgr.x (mgr.14150) 81 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:18.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:17 vm03 bash[17524]: audit 2026-03-09T14:18:17.290506+0000 mon.c (mon.1) 5 : audit [INF] from='client.? 192.168.123.103:0/2939656946' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6f17c91b-de65-4e8c-9e74-a512b4d9d1c9"}]: dispatch
2026-03-09T14:18:18.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:17 vm03 bash[17524]: audit 2026-03-09T14:18:17.292479+0000 mon.a (mon.0) 285 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6f17c91b-de65-4e8c-9e74-a512b4d9d1c9"}]: dispatch
2026-03-09T14:18:18.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:17 vm03 bash[17524]: audit 2026-03-09T14:18:17.295082+0000 mon.a (mon.0) 286 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6f17c91b-de65-4e8c-9e74-a512b4d9d1c9"}]': finished
2026-03-09T14:18:18.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:17 vm03 bash[17524]: cluster 2026-03-09T14:18:17.296980+0000 mon.a (mon.0) 287 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in
2026-03-09T14:18:18.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:17 vm03 bash[17524]: audit 2026-03-09T14:18:17.297068+0000 mon.a (mon.0) 288 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T14:18:19.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:18 vm04 bash[19581]: audit 2026-03-09T14:18:17.883896+0000 mon.b (mon.2) 3 : audit [DBG] from='client.? 192.168.123.103:0/271433273' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T14:18:19.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:18 vm05 bash[20070]: audit 2026-03-09T14:18:17.883896+0000 mon.b (mon.2) 3 : audit [DBG] from='client.? 192.168.123.103:0/271433273' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T14:18:19.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:18 vm03 bash[17524]: audit 2026-03-09T14:18:17.883896+0000 mon.b (mon.2) 3 : audit [DBG] from='client.? 192.168.123.103:0/271433273' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T14:18:20.011 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:19 vm04 bash[19581]: cluster 2026-03-09T14:18:18.913456+0000 mgr.x (mgr.14150) 82 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:20.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:19 vm05 bash[20070]: cluster 2026-03-09T14:18:18.913456+0000 mgr.x (mgr.14150) 82 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:20.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:19 vm03 bash[17524]: cluster 2026-03-09T14:18:18.913456+0000 mgr.x (mgr.14150) 82 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:22.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:21 vm04 bash[19581]: cluster 2026-03-09T14:18:20.913649+0000 mgr.x (mgr.14150) 83 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:22.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:21 vm05 bash[20070]: cluster 2026-03-09T14:18:20.913649+0000 mgr.x (mgr.14150) 83 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:22.306 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:21 vm03 bash[17524]: cluster 2026-03-09T14:18:20.913649+0000 mgr.x (mgr.14150) 83 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:24.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:23 vm04 bash[19581]: cluster 2026-03-09T14:18:22.913850+0000 mgr.x (mgr.14150) 84 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:24.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:23 vm05 bash[20070]: cluster 2026-03-09T14:18:22.913850+0000 mgr.x (mgr.14150) 84 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:24.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:23 vm03 bash[17524]: cluster 2026-03-09T14:18:22.913850+0000 mgr.x (mgr.14150) 84 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:26.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:25 vm04 bash[19581]: cluster 2026-03-09T14:18:24.914016+0000 mgr.x (mgr.14150) 85 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:26.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:25 vm05 bash[20070]: cluster 2026-03-09T14:18:24.914016+0000 mgr.x (mgr.14150) 85 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:26.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:25 vm03 bash[17524]: cluster 2026-03-09T14:18:24.914016+0000 mgr.x (mgr.14150) 85 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:27.228 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:18:27 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:18:27.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:26 vm03 bash[17524]: audit 2026-03-09T14:18:26.371512+0000 mon.a (mon.0) 289 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-09T14:18:27.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:26 vm03 bash[17524]: audit 2026-03-09T14:18:26.371959+0000 mon.a (mon.0) 290 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:18:27.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:26 vm03 bash[17524]: cephadm 2026-03-09T14:18:26.372295+0000 mgr.x (mgr.14150) 86 : cephadm [INF] Deploying daemon osd.0 on vm03
2026-03-09T14:18:27.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:27 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:18:27.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:26 vm04 bash[19581]: audit 2026-03-09T14:18:26.371512+0000 mon.a (mon.0) 289 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-09T14:18:27.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:26 vm04 bash[19581]: audit 2026-03-09T14:18:26.371959+0000 mon.a (mon.0) 290 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:18:27.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:26 vm04 bash[19581]: cephadm 2026-03-09T14:18:26.372295+0000 mgr.x (mgr.14150) 86 : cephadm [INF] Deploying daemon osd.0 on vm03
2026-03-09T14:18:27.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:26 vm05 bash[20070]: audit 2026-03-09T14:18:26.371512+0000 mon.a (mon.0) 289 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-09T14:18:27.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:26 vm05 bash[20070]: audit 2026-03-09T14:18:26.371959+0000 mon.a (mon.0) 290 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:18:27.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:26 vm05 bash[20070]: cephadm 2026-03-09T14:18:26.372295+0000 mgr.x (mgr.14150) 86 : cephadm [INF] Deploying daemon osd.0 on vm03
2026-03-09T14:18:27.556 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:18:27 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:18:27.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:27 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:18:28.261 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:27 vm04 bash[19581]: cluster 2026-03-09T14:18:26.914164+0000 mgr.x (mgr.14150) 87 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:28.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:27 vm04 bash[19581]: audit 2026-03-09T14:18:27.315916+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:18:28.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:27 vm04 bash[19581]: audit 2026-03-09T14:18:27.320309+0000 mon.a (mon.0) 292 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:28.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:27 vm04 bash[19581]: audit 2026-03-09T14:18:27.323920+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:28.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:27 vm05 bash[20070]: cluster 2026-03-09T14:18:26.914164+0000 mgr.x (mgr.14150) 87 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:28.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:27 vm05 bash[20070]: audit 2026-03-09T14:18:27.315916+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:18:28.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:27 vm05 bash[20070]: audit 2026-03-09T14:18:27.320309+0000 mon.a (mon.0) 292 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:28.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:27 vm05 bash[20070]: audit 2026-03-09T14:18:27.323920+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:28.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:27 vm03 bash[17524]: cluster 2026-03-09T14:18:26.914164+0000 mgr.x (mgr.14150) 87 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:28.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:27 vm03 bash[17524]: audit 2026-03-09T14:18:27.315916+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:18:28.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:27 vm03 bash[17524]: audit 2026-03-09T14:18:27.320309+0000 mon.a (mon.0) 292 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:28.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:27 vm03 bash[17524]: audit 2026-03-09T14:18:27.323920+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:30.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:30 vm04 bash[19581]: cluster 2026-03-09T14:18:28.914312+0000 mgr.x (mgr.14150) 88 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:30.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:30 vm05 bash[20070]: cluster 2026-03-09T14:18:28.914312+0000 mgr.x (mgr.14150) 88 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:30.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:30 vm03 bash[17524]: cluster 2026-03-09T14:18:28.914312+0000 mgr.x (mgr.14150) 88 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:31.511 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:31 vm04 bash[19581]: audit 2026-03-09T14:18:30.554940+0000 mon.a (mon.0) 294 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1075788976,v1:192.168.123.103:6803/1075788976]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-09T14:18:31.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:31 vm05 bash[20070]: audit 2026-03-09T14:18:30.554940+0000 mon.a (mon.0) 294 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1075788976,v1:192.168.123.103:6803/1075788976]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-09T14:18:31.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:31 vm03 bash[17524]: audit 2026-03-09T14:18:30.554940+0000 mon.a (mon.0) 294 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1075788976,v1:192.168.123.103:6803/1075788976]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-09T14:18:32.511 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:32 vm04 bash[19581]: cluster 2026-03-09T14:18:30.914465+0000 mgr.x (mgr.14150) 89 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:32.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:32 vm04 bash[19581]: audit 2026-03-09T14:18:31.151335+0000 mon.a (mon.0) 295 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1075788976,v1:192.168.123.103:6803/1075788976]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T14:18:32.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:32 vm04 bash[19581]: audit 2026-03-09T14:18:31.151335+0000 mon.a (mon.0) 295 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1075788976,v1:192.168.123.103:6803/1075788976]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T14:18:32.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:32 vm04 bash[19581]: cluster 2026-03-09T14:18:31.152653+0000 mon.a (mon.0) 296 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T14:18:32.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:32 vm04 bash[19581]: cluster 2026-03-09T14:18:31.152653+0000 mon.a (mon.0) 296 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T14:18:32.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:32 vm04 bash[19581]: audit 2026-03-09T14:18:31.152771+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:18:32.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:32 vm04 bash[19581]: audit 2026-03-09T14:18:31.152771+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:18:32.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:32 vm04 bash[19581]: audit 2026-03-09T14:18:31.153030+0000 mon.a (mon.0) 298 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1075788976,v1:192.168.123.103:6803/1075788976]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T14:18:32.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:32 vm04 bash[19581]: audit 2026-03-09T14:18:31.153030+0000 mon.a (mon.0) 298 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1075788976,v1:192.168.123.103:6803/1075788976]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T14:18:32.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:32 vm05 bash[20070]: cluster 2026-03-09T14:18:30.914465+0000 mgr.x (mgr.14150) 89 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:18:32.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:32 vm05 bash[20070]: cluster 2026-03-09T14:18:30.914465+0000 mgr.x (mgr.14150) 89 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:18:32.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:32 vm05 bash[20070]: audit 2026-03-09T14:18:31.151335+0000 mon.a (mon.0) 295 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1075788976,v1:192.168.123.103:6803/1075788976]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T14:18:32.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:32 vm05 bash[20070]: audit 2026-03-09T14:18:31.151335+0000 mon.a (mon.0) 295 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1075788976,v1:192.168.123.103:6803/1075788976]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T14:18:32.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:32 vm05 bash[20070]: cluster 2026-03-09T14:18:31.152653+0000 mon.a (mon.0) 296 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T14:18:32.513 
2026-03-09T14:18:32.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:32 vm05 bash[20070]: audit 2026-03-09T14:18:31.152771+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T14:18:32.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:32 vm05 bash[20070]: audit 2026-03-09T14:18:31.153030+0000 mon.a (mon.0) 298 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1075788976,v1:192.168.123.103:6803/1075788976]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-09T14:18:32.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:32 vm03 bash[17524]: cluster 2026-03-09T14:18:30.914465+0000 mgr.x (mgr.14150) 89 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:32.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:32 vm03 bash[17524]: audit 2026-03-09T14:18:31.151335+0000 mon.a (mon.0) 295 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1075788976,v1:192.168.123.103:6803/1075788976]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
2026-03-09T14:18:32.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:32 vm03 bash[17524]: cluster 2026-03-09T14:18:31.152653+0000 mon.a (mon.0) 296 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in
2026-03-09T14:18:32.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:32 vm03 bash[17524]: audit 2026-03-09T14:18:31.152771+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:18:32.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:32 vm03 bash[17524]: audit 2026-03-09T14:18:31.153030+0000 mon.a (mon.0) 298 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1075788976,v1:192.168.123.103:6803/1075788976]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T14:18:32.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:32 vm03 bash[17524]: audit 2026-03-09T14:18:31.153030+0000 mon.a (mon.0) 298 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1075788976,v1:192.168.123.103:6803/1075788976]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T14:18:33.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:33 vm04 bash[19581]: audit 2026-03-09T14:18:32.154240+0000 mon.a (mon.0) 299 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1075788976,v1:192.168.123.103:6803/1075788976]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-09T14:18:33.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:33 vm04 bash[19581]: audit 2026-03-09T14:18:32.154240+0000 mon.a (mon.0) 299 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1075788976,v1:192.168.123.103:6803/1075788976]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-09T14:18:33.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:33 vm04 bash[19581]: cluster 2026-03-09T14:18:32.156194+0000 mon.a (mon.0) 300 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T14:18:33.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:33 vm04 bash[19581]: cluster 2026-03-09T14:18:32.156194+0000 mon.a (mon.0) 300 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T14:18:33.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:33 vm04 bash[19581]: audit 2026-03-09T14:18:32.157310+0000 mon.a (mon.0) 301 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:18:33.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:33 vm04 bash[19581]: audit 2026-03-09T14:18:32.157310+0000 mon.a (mon.0) 301 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:18:33.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:33 vm04 bash[19581]: audit 2026-03-09T14:18:32.161898+0000 mon.a (mon.0) 302 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:18:33.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:33 vm04 bash[19581]: audit 2026-03-09T14:18:32.161898+0000 mon.a (mon.0) 302 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:18:33.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:33 vm05 bash[20070]: audit 2026-03-09T14:18:32.154240+0000 mon.a (mon.0) 299 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1075788976,v1:192.168.123.103:6803/1075788976]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': 
finished 2026-03-09T14:18:33.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:33 vm05 bash[20070]: audit 2026-03-09T14:18:32.154240+0000 mon.a (mon.0) 299 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1075788976,v1:192.168.123.103:6803/1075788976]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-09T14:18:33.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:33 vm05 bash[20070]: cluster 2026-03-09T14:18:32.156194+0000 mon.a (mon.0) 300 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T14:18:33.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:33 vm05 bash[20070]: cluster 2026-03-09T14:18:32.156194+0000 mon.a (mon.0) 300 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T14:18:33.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:33 vm05 bash[20070]: audit 2026-03-09T14:18:32.157310+0000 mon.a (mon.0) 301 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:18:33.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:33 vm05 bash[20070]: audit 2026-03-09T14:18:32.157310+0000 mon.a (mon.0) 301 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:18:33.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:33 vm05 bash[20070]: audit 2026-03-09T14:18:32.161898+0000 mon.a (mon.0) 302 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:18:33.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:33 vm05 bash[20070]: audit 2026-03-09T14:18:32.161898+0000 mon.a (mon.0) 302 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:18:33.558 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:33 vm03 bash[17524]: audit 2026-03-09T14:18:32.154240+0000 mon.a (mon.0) 299 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1075788976,v1:192.168.123.103:6803/1075788976]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-09T14:18:33.558 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:33 vm03 bash[17524]: audit 2026-03-09T14:18:32.154240+0000 mon.a (mon.0) 299 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1075788976,v1:192.168.123.103:6803/1075788976]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-09T14:18:33.558 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:33 vm03 bash[17524]: cluster 2026-03-09T14:18:32.156194+0000 mon.a (mon.0) 300 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T14:18:33.558 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:33 vm03 bash[17524]: cluster 2026-03-09T14:18:32.156194+0000 mon.a (mon.0) 300 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T14:18:33.558 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:33 vm03 bash[17524]: audit 2026-03-09T14:18:32.157310+0000 mon.a (mon.0) 301 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:18:33.558 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:33 vm03 bash[17524]: audit 2026-03-09T14:18:32.157310+0000 mon.a (mon.0) 301 : audit [DBG] 
from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:18:33.558 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:33 vm03 bash[17524]: audit 2026-03-09T14:18:32.161898+0000 mon.a (mon.0) 302 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:18:33.558 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:33 vm03 bash[17524]: audit 2026-03-09T14:18:32.161898+0000 mon.a (mon.0) 302 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:18:34.395 INFO:teuthology.orchestra.run.vm03.stdout:Created osd(s) 0 on host 'vm03' 2026-03-09T14:18:34.443 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:34 vm03 bash[17524]: cluster 2026-03-09T14:18:31.521352+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T14:18:34.443 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:34 vm03 bash[17524]: cluster 2026-03-09T14:18:31.521352+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T14:18:34.443 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:34 vm03 bash[17524]: cluster 2026-03-09T14:18:31.521401+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T14:18:34.443 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:34 vm03 bash[17524]: cluster 2026-03-09T14:18:31.521401+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T14:18:34.443 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:34 vm03 bash[17524]: cluster 2026-03-09T14:18:32.914611+0000 mgr.x (mgr.14150) 90 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:18:34.443 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:34 vm03 bash[17524]: cluster 2026-03-09T14:18:32.914611+0000 mgr.x (mgr.14150) 90 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:18:34.443 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:34 vm03 bash[17524]: audit 2026-03-09T14:18:33.159620+0000 mon.a (mon.0) 303 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:18:34.443 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:34 vm03 bash[17524]: audit 2026-03-09T14:18:33.159620+0000 mon.a (mon.0) 303 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:18:34.443 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:34 vm03 bash[17524]: cluster 2026-03-09T14:18:33.178738+0000 mon.a (mon.0) 304 : cluster [INF] osd.0 [v2:192.168.123.103:6802/1075788976,v1:192.168.123.103:6803/1075788976] boot 2026-03-09T14:18:34.443 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:34 vm03 bash[17524]: cluster 2026-03-09T14:18:33.178738+0000 mon.a (mon.0) 304 : cluster [INF] osd.0 [v2:192.168.123.103:6802/1075788976,v1:192.168.123.103:6803/1075788976] boot 2026-03-09T14:18:34.443 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:34 vm03 bash[17524]: cluster 2026-03-09T14:18:33.178761+0000 mon.a (mon.0) 305 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-09T14:18:34.444 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:34 vm03 bash[17524]: cluster 2026-03-09T14:18:33.178761+0000 mon.a (mon.0) 305 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-09T14:18:34.444 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:34 
vm03 bash[17524]: audit 2026-03-09T14:18:33.178843+0000 mon.a (mon.0) 306 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:18:34.444 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:34 vm03 bash[17524]: audit 2026-03-09T14:18:33.178843+0000 mon.a (mon.0) 306 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:18:34.444 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:34 vm03 bash[17524]: audit 2026-03-09T14:18:33.389288+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:18:34.444 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:34 vm03 bash[17524]: audit 2026-03-09T14:18:33.389288+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:18:34.444 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:34 vm03 bash[17524]: audit 2026-03-09T14:18:33.395059+0000 mon.a (mon.0) 308 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:18:34.444 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:34 vm03 bash[17524]: audit 2026-03-09T14:18:33.395059+0000 mon.a (mon.0) 308 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:18:34.444 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:34 vm03 bash[17524]: audit 2026-03-09T14:18:33.764980+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:18:34.444 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:34 vm03 bash[17524]: audit 2026-03-09T14:18:33.764980+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:18:34.444 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:34 vm03 bash[17524]: audit 2026-03-09T14:18:33.765481+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:18:34.444 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:34 vm03 bash[17524]: audit 2026-03-09T14:18:33.765481+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:18:34.444 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:34 vm03 bash[17524]: audit 2026-03-09T14:18:33.769899+0000 mon.a (mon.0) 311 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:18:34.444 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:34 vm03 bash[17524]: audit 2026-03-09T14:18:33.769899+0000 mon.a (mon.0) 311 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:18:34.488 DEBUG:teuthology.orchestra.run.vm03:osd.0> sudo journalctl -f -n 0 -u ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.0.service 2026-03-09T14:18:34.489 INFO:tasks.cephadm:Deploying osd.1 on vm03 with /dev/vdd... 
2026-03-09T14:18:34.489 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- lvm zap /dev/vdd
2026-03-09T14:18:34.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:34 vm04 bash[19581]: cluster 2026-03-09T14:18:31.521352+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T14:18:34.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:34 vm04 bash[19581]: cluster 2026-03-09T14:18:31.521401+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T14:18:34.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:34 vm04 bash[19581]: cluster 2026-03-09T14:18:32.914611+0000 mgr.x (mgr.14150) 90 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:34.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:34 vm04 bash[19581]: audit 2026-03-09T14:18:33.159620+0000 mon.a (mon.0) 303 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T14:18:34.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:34 vm04 bash[19581]: cluster 2026-03-09T14:18:33.178738+0000 mon.a (mon.0) 304 : cluster [INF] osd.0 [v2:192.168.123.103:6802/1075788976,v1:192.168.123.103:6803/1075788976] boot
2026-03-09T14:18:34.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:34 vm04 bash[19581]: cluster 2026-03-09T14:18:33.178761+0000 mon.a (mon.0) 305 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in
2026-03-09T14:18:34.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:34 vm04 bash[19581]: audit 2026-03-09T14:18:33.178843+0000 mon.a (mon.0) 306 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T14:18:34.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:34 vm04 bash[19581]: audit 2026-03-09T14:18:33.389288+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:34.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:34 vm04 bash[19581]: audit 2026-03-09T14:18:33.395059+0000 mon.a (mon.0) 308 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:34.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:34 vm04 bash[19581]: audit 2026-03-09T14:18:33.764980+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:18:34.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:34 vm04 bash[19581]: audit 2026-03-09T14:18:33.765481+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:18:34.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:34 vm04 bash[19581]: audit 2026-03-09T14:18:33.769899+0000 mon.a (mon.0) 311 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:34.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:34 vm05 bash[20070]: cluster 2026-03-09T14:18:31.521352+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T14:18:34.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:34 vm05 bash[20070]: cluster 2026-03-09T14:18:31.521401+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T14:18:34.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:34 vm05 bash[20070]: cluster 2026-03-09T14:18:32.914611+0000 mgr.x (mgr.14150) 90 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:18:34.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:34 vm05 bash[20070]: audit 2026-03-09T14:18:33.159620+0000 mon.a (mon.0) 303 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T14:18:34.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:34 vm05 bash[20070]: cluster 2026-03-09T14:18:33.178738+0000 mon.a (mon.0) 304 : cluster [INF] osd.0 [v2:192.168.123.103:6802/1075788976,v1:192.168.123.103:6803/1075788976] boot
2026-03-09T14:18:34.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:34 vm05 bash[20070]: cluster 2026-03-09T14:18:33.178761+0000 mon.a (mon.0) 305 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in
2026-03-09T14:18:34.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:34 vm05 bash[20070]: audit 2026-03-09T14:18:33.178843+0000 mon.a (mon.0) 306 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T14:18:34.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:34 vm05 bash[20070]: audit 2026-03-09T14:18:33.389288+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:34.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:34 vm05 bash[20070]: audit 2026-03-09T14:18:33.395059+0000 mon.a (mon.0) 308 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:34.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:34 vm05 bash[20070]: audit 2026-03-09T14:18:33.764980+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:18:34.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:34 vm05 bash[20070]: audit 2026-03-09T14:18:33.765481+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:18:34.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:34 vm05 bash[20070]: audit 2026-03-09T14:18:33.769899+0000 mon.a (mon.0) 311 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:35.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:35 vm04 bash[19581]: audit 2026-03-09T14:18:34.377021+0000 mon.a (mon.0) 312 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:18:35.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:35 vm04 bash[19581]: audit 2026-03-09T14:18:34.382998+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:35.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:35 vm04 bash[19581]: audit 2026-03-09T14:18:34.387841+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:35.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:35 vm04 bash[19581]: cluster 2026-03-09T14:18:34.408509+0000 mon.a (mon.0) 315 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in
2026-03-09T14:18:35.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:35 vm05 bash[20070]: audit 2026-03-09T14:18:34.377021+0000 mon.a (mon.0) 312 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:18:35.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:35 vm05 bash[20070]: audit 2026-03-09T14:18:34.382998+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:35.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:35 vm05 bash[20070]: audit 2026-03-09T14:18:34.387841+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:35.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:35 vm05 bash[20070]: cluster 2026-03-09T14:18:34.408509+0000 mon.a (mon.0) 315 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in
2026-03-09T14:18:35.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:35 vm03 bash[17524]: audit 2026-03-09T14:18:34.377021+0000 mon.a (mon.0) 312 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:18:35.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:35 vm03 bash[17524]: audit 2026-03-09T14:18:34.382998+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:35.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:35 vm03 bash[17524]: audit 2026-03-09T14:18:34.387841+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:35.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:35 vm03 bash[17524]: cluster 2026-03-09T14:18:34.408509+0000 mon.a (mon.0) 315 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in
2026-03-09T14:18:36.761 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:36 vm04 bash[19581]: cluster 2026-03-09T14:18:34.914791+0000 mgr.x (mgr.14150) 91 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:36.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:36 vm05 bash[20070]: cluster 2026-03-09T14:18:34.914791+0000 mgr.x (mgr.14150) 91 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:36.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:36 vm03 bash[17524]: cluster 2026-03-09T14:18:34.914791+0000 mgr.x (mgr.14150) 91 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:38.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:38 vm04 bash[19581]: cluster 2026-03-09T14:18:36.914980+0000 mgr.x (mgr.14150) 92 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:38.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:38 vm05 bash[20070]: cluster 2026-03-09T14:18:36.914980+0000 mgr.x (mgr.14150) 92 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:38.806 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:38 vm03 bash[17524]: cluster 2026-03-09T14:18:36.914980+0000 mgr.x (mgr.14150) 92 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:39.149 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config
2026-03-09T14:18:39.964 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:18:39.978 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph orch daemon add osd vm03:/dev/vdd
2026-03-09T14:18:40.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:40 vm04 bash[19581]: cluster 2026-03-09T14:18:38.915170+0000 mgr.x (mgr.14150) 93 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:40.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:40 vm05 bash[20070]: cluster 2026-03-09T14:18:38.915170+0000 mgr.x (mgr.14150) 93 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:40.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:40 vm03 bash[17524]: cluster 2026-03-09T14:18:38.915170+0000 mgr.x (mgr.14150) 93 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:42.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:41 vm04 bash[19581]: cephadm 2026-03-09T14:18:40.656663+0000 mgr.x (mgr.14150) 94 : cephadm [INF] Detected new or changed devices on vm03
2026-03-09T14:18:42.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:41 vm04 bash[19581]: audit 2026-03-09T14:18:40.662447+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:42.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:41 vm04 bash[19581]: audit 2026-03-09T14:18:40.666553+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:42.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:41 vm04 bash[19581]: audit 2026-03-09T14:18:40.667637+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:18:42.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:41 vm04 bash[19581]: cephadm 2026-03-09T14:18:40.668020+0000 mgr.x (mgr.14150) 95 : cephadm [INF] Adjusting osd_memory_target on vm03 to 455.7M
2026-03-09T14:18:42.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:41 vm04 bash[19581]: cephadm 2026-03-09T14:18:40.668540+0000 mgr.x (mgr.14150) 96 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 477921689: error parsing value: Value '477921689' is below minimum 939524096
2026-03-09T14:18:42.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:41 vm04 bash[19581]: audit 2026-03-09T14:18:40.668869+0000 mon.a (mon.0) 319 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:18:42.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:41 vm04 bash[19581]: audit 2026-03-09T14:18:40.669342+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:18:42.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:41 vm04 bash[19581]: audit 2026-03-09T14:18:40.673521+0000 mon.a (mon.0) 321 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:42.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:41 vm04 bash[19581]: cluster 2026-03-09T14:18:40.915591+0000 mgr.x (mgr.14150) 97 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:42.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:41 vm05 bash[20070]: cephadm 2026-03-09T14:18:40.656663+0000 mgr.x (mgr.14150) 94 : cephadm [INF] Detected new or changed devices on vm03
2026-03-09T14:18:42.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:41 vm05 bash[20070]: audit 2026-03-09T14:18:40.662447+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:42.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:41 vm05 bash[20070]: audit 2026-03-09T14:18:40.666553+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:42.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:41 vm05 bash[20070]: audit 2026-03-09T14:18:40.667637+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:18:42.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:41 vm05 bash[20070]: cephadm 2026-03-09T14:18:40.668020+0000 mgr.x (mgr.14150) 95 : cephadm [INF] Adjusting osd_memory_target on vm03 to 455.7M
2026-03-09T14:18:42.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:41 vm05 bash[20070]: cephadm 2026-03-09T14:18:40.668540+0000 mgr.x (mgr.14150) 96 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 477921689: error parsing value: Value '477921689' is below minimum 939524096
2026-03-09T14:18:42.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:41 vm05 bash[20070]: audit 2026-03-09T14:18:40.668869+0000 mon.a (mon.0) 319 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:18:42.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:41 vm05 bash[20070]: audit 2026-03-09T14:18:40.669342+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:18:42.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:41 vm05 bash[20070]: audit 2026-03-09T14:18:40.673521+0000 mon.a (mon.0) 321 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:42.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:41 vm05 bash[20070]: cluster 2026-03-09T14:18:40.915591+0000 mgr.x (mgr.14150) 97 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:42.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:41 vm03 bash[17524]: cephadm 2026-03-09T14:18:40.656663+0000 mgr.x (mgr.14150) 94 : cephadm [INF] Detected new or changed devices on vm03
2026-03-09T14:18:42.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:41 vm03 bash[17524]: audit 2026-03-09T14:18:40.662447+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:42.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:41 vm03 bash[17524]: audit 2026-03-09T14:18:40.666553+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:42.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:41 vm03 bash[17524]: audit 2026-03-09T14:18:40.667637+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:18:42.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:41 vm03 bash[17524]: cephadm 2026-03-09T14:18:40.668020+0000 mgr.x (mgr.14150) 95 : cephadm [INF] Adjusting osd_memory_target on vm03 to 455.7M
2026-03-09T14:18:42.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:41 vm03 bash[17524]: cephadm 2026-03-09T14:18:40.668540+0000 mgr.x (mgr.14150) 96 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 477921689: error parsing value: Value '477921689' is below minimum 939524096
2026-03-09T14:18:42.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:41 vm03 bash[17524]: audit 2026-03-09T14:18:40.668869+0000 mon.a (mon.0) 319 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:18:42.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:41 vm03 bash[17524]: audit 2026-03-09T14:18:40.669342+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:18:42.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:41 vm03 bash[17524]: audit 2026-03-09T14:18:40.673521+0000 mon.a (mon.0) 321 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:18:42.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:41 vm03 bash[17524]: cluster 2026-03-09T14:18:40.915591+0000 mgr.x (mgr.14150) 97 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:44.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:43 vm04 bash[19581]: cluster 2026-03-09T14:18:42.915788+0000 mgr.x (mgr.14150) 98 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:44.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:43 vm05 bash[20070]: cluster 2026-03-09T14:18:42.915788+0000 mgr.x (mgr.14150) 98 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:44.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:43 vm03 bash[17524]: cluster 2026-03-09T14:18:42.915788+0000 mgr.x (mgr.14150) 98 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:44.628 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config
2026-03-09T14:18:45.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:44 vm04 bash[19581]: audit 2026-03-09T14:18:44.865925+0000 mon.a (mon.0) 322 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T14:18:45.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:44 vm04 bash[19581]: audit 2026-03-09T14:18:44.867198+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T14:18:45.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:44 vm04 bash[19581]: audit 2026-03-09T14:18:44.867644+0000 mon.a (mon.0) 324 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:18:45.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:44 vm05 bash[20070]: audit 2026-03-09T14:18:44.865925+0000 mon.a (mon.0) 322 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T14:18:45.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:44 vm05 bash[20070]: audit 2026-03-09T14:18:44.867198+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T14:18:45.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:44 vm05 bash[20070]: audit 2026-03-09T14:18:44.867644+0000 mon.a (mon.0) 324 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:18:45.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:44 vm03 bash[17524]: audit 2026-03-09T14:18:44.865925+0000 mon.a (mon.0) 322 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T14:18:45.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:44 vm03 bash[17524]: audit 2026-03-09T14:18:44.867198+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T14:18:45.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:44 vm03 bash[17524]: audit 2026-03-09T14:18:44.867644+0000 mon.a (mon.0) 324 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:18:46.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:45 vm04 bash[19581]: audit 2026-03-09T14:18:44.864411+0000 mgr.x (mgr.14150) 99 : audit [DBG] from='client.24134 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:18:46.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:45 vm04 bash[19581]: cluster 2026-03-09T14:18:44.915976+0000 mgr.x (mgr.14150) 100 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:46.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:45 vm05 bash[20070]: audit 2026-03-09T14:18:44.864411+0000 mgr.x (mgr.14150) 99 : audit [DBG] from='client.24134 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:18:46.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:45 vm05 bash[20070]: cluster 2026-03-09T14:18:44.915976+0000 mgr.x (mgr.14150) 100 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:46.306 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:45 vm03 bash[17524]: audit 2026-03-09T14:18:44.864411+0000 mgr.x (mgr.14150) 99 : audit [DBG] from='client.24134 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:18:46.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:45 vm03 bash[17524]: cluster 2026-03-09T14:18:44.915976+0000 mgr.x (mgr.14150) 100 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:48.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:47 vm04 bash[19581]: cluster 2026-03-09T14:18:46.916225+0000 mgr.x (mgr.14150) 101 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:48.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:47 vm05 bash[20070]: cluster 2026-03-09T14:18:46.916225+0000 mgr.x (mgr.14150) 101 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:48.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:47 vm03 bash[17524]: cluster 2026-03-09T14:18:46.916225+0000 mgr.x (mgr.14150) 101 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:50.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:49 vm04 bash[19581]: cluster 2026-03-09T14:18:48.916462+0000 mgr.x (mgr.14150) 102 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:50.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:49 vm04 bash[19581]: audit 2026-03-09T14:18:49.218274+0000 mon.b (mon.2) 4 : audit [INF] from='client.? 192.168.123.103:0/3663886908' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0ee8add4-d132-4666-b7ad-a8416c3c05bf"}]: dispatch
2026-03-09T14:18:50.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:49 vm04 bash[19581]: audit 2026-03-09T14:18:49.219714+0000 mon.a (mon.0) 325 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0ee8add4-d132-4666-b7ad-a8416c3c05bf"}]: dispatch
2026-03-09T14:18:50.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:49 vm04 bash[19581]: audit 2026-03-09T14:18:49.222321+0000 mon.a (mon.0) 326 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0ee8add4-d132-4666-b7ad-a8416c3c05bf"}]': finished
2026-03-09T14:18:50.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:49 vm04 bash[19581]: cluster 2026-03-09T14:18:49.224607+0000 mon.a (mon.0) 327 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in
2026-03-09T14:18:50.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:49 vm04 bash[19581]: audit 2026-03-09T14:18:49.224741+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T14:18:50.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:49 vm04 bash[19581]: audit 2026-03-09T14:18:49.834633+0000 mon.b (mon.2) 5 : audit [DBG] from='client.? 192.168.123.103:0/1396137356' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T14:18:50.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:49 vm05 bash[20070]: cluster 2026-03-09T14:18:48.916462+0000 mgr.x (mgr.14150) 102 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:50.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:49 vm05 bash[20070]: audit 2026-03-09T14:18:49.218274+0000 mon.b (mon.2) 4 : audit [INF] from='client.? 192.168.123.103:0/3663886908' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0ee8add4-d132-4666-b7ad-a8416c3c05bf"}]: dispatch
2026-03-09T14:18:50.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:49 vm05 bash[20070]: audit 2026-03-09T14:18:49.219714+0000 mon.a (mon.0) 325 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0ee8add4-d132-4666-b7ad-a8416c3c05bf"}]: dispatch
2026-03-09T14:18:50.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:49 vm05 bash[20070]: audit 2026-03-09T14:18:49.222321+0000 mon.a (mon.0) 326 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0ee8add4-d132-4666-b7ad-a8416c3c05bf"}]': finished
2026-03-09T14:18:50.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:49 vm05 bash[20070]: cluster 2026-03-09T14:18:49.224607+0000 mon.a (mon.0) 327 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in
2026-03-09T14:18:50.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:49 vm05 bash[20070]: audit 2026-03-09T14:18:49.224741+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T14:18:50.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:49 vm05 bash[20070]: audit 2026-03-09T14:18:49.834633+0000 mon.b (mon.2) 5 : audit [DBG] from='client.? 192.168.123.103:0/1396137356' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T14:18:50.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:49 vm03 bash[17524]: cluster 2026-03-09T14:18:48.916462+0000 mgr.x (mgr.14150) 102 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:50.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:49 vm03 bash[17524]: audit 2026-03-09T14:18:49.218274+0000 mon.b (mon.2) 4 : audit [INF] from='client.? 192.168.123.103:0/3663886908' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0ee8add4-d132-4666-b7ad-a8416c3c05bf"}]: dispatch
2026-03-09T14:18:50.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:49 vm03 bash[17524]: audit 2026-03-09T14:18:49.219714+0000 mon.a (mon.0) 325 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0ee8add4-d132-4666-b7ad-a8416c3c05bf"}]: dispatch
2026-03-09T14:18:50.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:49 vm03 bash[17524]: audit 2026-03-09T14:18:49.222321+0000 mon.a (mon.0) 326 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0ee8add4-d132-4666-b7ad-a8416c3c05bf"}]': finished
2026-03-09T14:18:50.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:49 vm03 bash[17524]: cluster 2026-03-09T14:18:49.224607+0000 mon.a (mon.0) 327 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in
2026-03-09T14:18:50.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:49 vm03 bash[17524]: audit 2026-03-09T14:18:49.224741+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T14:18:50.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:49 vm03 bash[17524]: audit 2026-03-09T14:18:49.834633+0000 mon.b (mon.2) 5 : audit [DBG] from='client.? 192.168.123.103:0/1396137356' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T14:18:52.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:51 vm04 bash[19581]: cluster 2026-03-09T14:18:50.916708+0000 mgr.x (mgr.14150) 103 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:52.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:51 vm05 bash[20070]: cluster 2026-03-09T14:18:50.916708+0000 mgr.x (mgr.14150) 103 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:52.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:51 vm03 bash[17524]: cluster 2026-03-09T14:18:50.916708+0000 mgr.x (mgr.14150) 103 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:54.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:53 vm04 bash[19581]: cluster 2026-03-09T14:18:52.916939+0000 mgr.x (mgr.14150) 104 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:54.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:53 vm05 bash[20070]: cluster 2026-03-09T14:18:52.916939+0000 mgr.x (mgr.14150) 104 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:54.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:53 vm03 bash[17524]: cluster 2026-03-09T14:18:52.916939+0000 mgr.x (mgr.14150) 104 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:56.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:56 vm04 bash[19581]: cluster 2026-03-09T14:18:54.917153+0000 mgr.x (mgr.14150) 105 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:56.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:56 vm05 bash[20070]: cluster 2026-03-09T14:18:54.917153+0000 mgr.x (mgr.14150) 105 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:56.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:56 vm03 bash[17524]: cluster 2026-03-09T14:18:54.917153+0000 mgr.x (mgr.14150) 105 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:58.283 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:58 vm03 bash[17524]: cluster 2026-03-09T14:18:56.917420+0000 mgr.x (mgr.14150) 106 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:58.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:58 vm04 bash[19581]: cluster 2026-03-09T14:18:56.917420+0000 mgr.x (mgr.14150) 106 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:58.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:58 vm05 bash[20070]: cluster 2026-03-09T14:18:56.917420+0000 mgr.x (mgr.14150) 106 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:18:59.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:59 vm03 bash[17524]: audit 2026-03-09T14:18:58.497539+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-09T14:18:59.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:59 vm03 bash[17524]: audit 2026-03-09T14:18:58.498062+0000 mon.a (mon.0) 330 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:18:59.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:59 vm03 bash[17524]: cephadm 2026-03-09T14:18:58.498486+0000 mgr.x (mgr.14150) 107 : cephadm [INF] Deploying daemon osd.1 on vm03
2026-03-09T14:18:59.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:59 vm04 bash[19581]: audit 2026-03-09T14:18:58.497539+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-09T14:18:59.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:59 vm04 bash[19581]: audit 2026-03-09T14:18:58.498062+0000 mon.a (mon.0) 330 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:18:59.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:18:59 vm04 bash[19581]: cephadm 2026-03-09T14:18:58.498486+0000 mgr.x (mgr.14150) 107 : cephadm [INF] Deploying daemon osd.1 on vm03
2026-03-09T14:18:59.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:59 vm05 bash[20070]: audit 2026-03-09T14:18:58.497539+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-09T14:18:59.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:59 vm05 bash[20070]: audit 2026-03-09T14:18:58.498062+0000 mon.a (mon.0) 330 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:18:59.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:18:59 vm05 bash[20070]: cephadm 2026-03-09T14:18:58.498486+0000 mgr.x (mgr.14150) 107 : cephadm [INF] Deploying daemon osd.1 on vm03
2026-03-09T14:18:59.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:59 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:18:59.557 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:18:59 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:18:59.557 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:18:59 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:18:59.925 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:18:59 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:18:59.925 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:18:59 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:18:59.925 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:18:59 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:19:00.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:00 vm03 bash[17524]: cluster 2026-03-09T14:18:58.917647+0000 mgr.x (mgr.14150) 108 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:19:00.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:00 vm03 bash[17524]: audit 2026-03-09T14:18:59.670060+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:19:00.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:00 vm03 bash[17524]: audit 2026-03-09T14:18:59.674616+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:00.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:00 vm03 bash[17524]: audit 2026-03-09T14:18:59.678772+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:00.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:00 vm04 bash[19581]: cluster 2026-03-09T14:18:58.917647+0000 mgr.x (mgr.14150) 108 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:19:00.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:00 vm04 bash[19581]: audit 2026-03-09T14:18:59.670060+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:19:00.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:00 vm04 bash[19581]: audit 2026-03-09T14:18:59.674616+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:00.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:00 vm04 bash[19581]: audit 2026-03-09T14:18:59.678772+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:00.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:00 vm05 bash[20070]: cluster 2026-03-09T14:18:58.917647+0000 mgr.x (mgr.14150) 108 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:19:00.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:00 vm05 bash[20070]: audit 2026-03-09T14:18:59.670060+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:19:00.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:00 vm05 bash[20070]: audit 2026-03-09T14:18:59.674616+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:00.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:00 vm05 bash[20070]: audit 2026-03-09T14:18:59.678772+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:02.306 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:02 vm03 bash[17524]: cluster 2026-03-09T14:19:00.917880+0000 mgr.x (mgr.14150) 109 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:19:02.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:02 vm04 bash[19581]: cluster 2026-03-09T14:19:00.917880+0000 mgr.x (mgr.14150) 109 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:19:02.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:02 vm05 bash[20070]: cluster 2026-03-09T14:19:00.917880+0000 mgr.x (mgr.14150) 109 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:19:03.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:03 vm03 bash[17524]: audit 2026-03-09T14:19:02.982857+0000 mon.a (mon.0) 334 : audit [INF] from='osd.1 [v2:192.168.123.103:6810/2015646488,v1:192.168.123.103:6811/2015646488]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-09T14:19:03.511 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:03 vm04 bash[19581]: audit 2026-03-09T14:19:02.982857+0000 mon.a (mon.0) 334 : audit [INF] from='osd.1 [v2:192.168.123.103:6810/2015646488,v1:192.168.123.103:6811/2015646488]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-09T14:19:03.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:03 vm05 bash[20070]: audit 2026-03-09T14:19:02.982857+0000 mon.a (mon.0) 334 : audit [INF] from='osd.1 [v2:192.168.123.103:6810/2015646488,v1:192.168.123.103:6811/2015646488]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-09T14:19:04.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:04 vm04 bash[19581]: cluster 2026-03-09T14:19:02.918145+0000 mgr.x (mgr.14150) 110 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:19:04.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:04 vm04 bash[19581]: audit 2026-03-09T14:19:03.069177+0000 mon.a (mon.0) 335 : audit [INF] from='osd.1 [v2:192.168.123.103:6810/2015646488,v1:192.168.123.103:6811/2015646488]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-09T14:19:04.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:04 vm04 bash[19581]: cluster 2026-03-09T14:19:03.071043+0000 mon.a (mon.0) 336 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in
2026-03-09T14:19:04.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:04 vm04 bash[19581]: audit 2026-03-09T14:19:03.071151+0000 mon.a (mon.0) 337 : audit [INF] from='osd.1 [v2:192.168.123.103:6810/2015646488,v1:192.168.123.103:6811/2015646488]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-09T14:19:04.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:04 vm04 bash[19581]: audit 2026-03-09T14:19:03.071244+0000 mon.a (mon.0) 338 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T14:19:04.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:04 vm05 bash[20070]: cluster 2026-03-09T14:19:02.918145+0000 mgr.x (mgr.14150) 110 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:19:04.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:04 vm05 bash[20070]: audit 2026-03-09T14:19:03.069177+0000 mon.a (mon.0) 335 : audit [INF] from='osd.1 [v2:192.168.123.103:6810/2015646488,v1:192.168.123.103:6811/2015646488]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-09T14:19:04.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:04 vm05 bash[20070]: cluster 2026-03-09T14:19:03.071043+0000 mon.a (mon.0) 336 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in
2026-03-09T14:19:04.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:04 vm05 bash[20070]: audit 2026-03-09T14:19:03.071151+0000 mon.a (mon.0) 337 : audit [INF] from='osd.1 [v2:192.168.123.103:6810/2015646488,v1:192.168.123.103:6811/2015646488]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-09T14:19:04.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:04 vm05 bash[20070]: audit 2026-03-09T14:19:03.071244+0000 mon.a (mon.0) 338 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T14:19:04.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:04 vm03 bash[17524]: cluster 2026-03-09T14:19:02.918145+0000 mgr.x (mgr.14150) 110 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:19:04.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:04 vm03 bash[17524]: audit 2026-03-09T14:19:03.069177+0000 mon.a (mon.0) 335 : audit [INF] from='osd.1 [v2:192.168.123.103:6810/2015646488,v1:192.168.123.103:6811/2015646488]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-09T14:19:04.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:04 vm03 bash[17524]: cluster 2026-03-09T14:19:03.071043+0000 mon.a (mon.0) 336 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in
2026-03-09T14:19:04.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:04 vm03 bash[17524]: audit 2026-03-09T14:19:03.071151+0000 mon.a (mon.0) 337 : audit [INF] from='osd.1 [v2:192.168.123.103:6810/2015646488,v1:192.168.123.103:6811/2015646488]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-09T14:19:04.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:04 vm03 bash[17524]: audit 2026-03-09T14:19:03.071244+0000 mon.a (mon.0) 338 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T14:19:05.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:05 vm04 bash[19581]: audit 2026-03-09T14:19:04.072301+0000 mon.a (mon.0) 339 : audit [INF] from='osd.1 [v2:192.168.123.103:6810/2015646488,v1:192.168.123.103:6811/2015646488]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-09T14:19:05.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:05 vm04 bash[19581]: cluster 2026-03-09T14:19:04.074832+0000 mon.a (mon.0) 340 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in
2026-03-09T14:19:05.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:05 vm04 bash[19581]: audit 2026-03-09T14:19:04.075773+0000 mon.a (mon.0) 341 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T14:19:05.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:05 vm04 bash[19581]: audit 2026-03-09T14:19:04.089078+0000 mon.a (mon.0) 342 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T14:19:05.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:05 vm05 bash[20070]: audit 2026-03-09T14:19:04.072301+0000 mon.a (mon.0) 339 : audit [INF] from='osd.1 [v2:192.168.123.103:6810/2015646488,v1:192.168.123.103:6811/2015646488]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-09T14:19:05.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:05 vm05 bash[20070]: cluster 2026-03-09T14:19:04.074832+0000 mon.a (mon.0) 340 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in
2026-03-09T14:19:05.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:05 vm05 bash[20070]: audit 2026-03-09T14:19:04.075773+0000 mon.a (mon.0) 341 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T14:19:05.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:05 vm05 bash[20070]: audit 2026-03-09T14:19:04.089078+0000 mon.a (mon.0) 342 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T14:19:05.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:05 vm03 bash[17524]: audit 2026-03-09T14:19:04.072301+0000 mon.a (mon.0) 339 : audit [INF] from='osd.1 [v2:192.168.123.103:6810/2015646488,v1:192.168.123.103:6811/2015646488]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-09T14:19:05.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:05 vm03 bash[17524]: cluster 2026-03-09T14:19:04.074832+0000 mon.a (mon.0) 340 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in
2026-03-09T14:19:05.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:05 vm03 bash[17524]: audit 2026-03-09T14:19:04.075773+0000 mon.a (mon.0) 341 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T14:19:05.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:05 vm03 bash[17524]: audit 2026-03-09T14:19:04.089078+0000 mon.a (mon.0) 342 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T14:19:06.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:06 vm04 bash[19581]: cluster 2026-03-09T14:19:04.025820+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T14:19:06.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:06 vm04 bash[19581]: cluster 2026-03-09T14:19:04.025879+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T14:19:06.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:06 vm04 bash[19581]: cluster 2026-03-09T14:19:04.918363+0000 mgr.x (mgr.14150) 111 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:19:06.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:06 vm04 bash[19581]: audit 2026-03-09T14:19:05.078210+0000 mon.a (mon.0) 343 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T14:19:06.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:06 vm04 bash[19581]: cluster 2026-03-09T14:19:05.091731+0000 mon.a (mon.0) 344 : cluster [INF] osd.1 [v2:192.168.123.103:6810/2015646488,v1:192.168.123.103:6811/2015646488] boot
2026-03-09T14:19:06.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:06 vm04 bash[19581]: cluster 2026-03-09T14:19:05.091755+0000 mon.a (mon.0) 345 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in
2026-03-09T14:19:06.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:06 vm04 bash[19581]: audit 2026-03-09T14:19:05.091926+0000 mon.a (mon.0) 346 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T14:19:06.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:06 vm04 bash[19581]: audit 2026-03-09T14:19:05.883173+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:06.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:06 vm04 bash[19581]: audit 2026-03-09T14:19:05.887886+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:06.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:06 vm04 bash[19581]: audit 2026-03-09T14:19:05.888570+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:19:06.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:06 vm04 bash[19581]: audit 2026-03-09T14:19:05.889029+0000 mon.a (mon.0) 350 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:19:06.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:06 vm04 bash[19581]: audit 2026-03-09T14:19:05.892657+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:06.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:06 vm05 bash[20070]: cluster 2026-03-09T14:19:04.025820+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T14:19:06.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:06 vm05 bash[20070]: cluster 2026-03-09T14:19:04.025879+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T14:19:06.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:06 vm05 bash[20070]: cluster 2026-03-09T14:19:04.918363+0000 mgr.x (mgr.14150) 111 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:19:06.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:06 vm05 bash[20070]: audit 2026-03-09T14:19:05.078210+0000 mon.a (mon.0) 343 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T14:19:06.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:06 vm05 bash[20070]: cluster 2026-03-09T14:19:05.091731+0000 mon.a (mon.0) 344 : cluster [INF] osd.1 [v2:192.168.123.103:6810/2015646488,v1:192.168.123.103:6811/2015646488] boot
2026-03-09T14:19:06.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:06 vm05 bash[20070]: cluster 2026-03-09T14:19:05.091755+0000 mon.a (mon.0) 345 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in
2026-03-09T14:19:06.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:06 vm05 bash[20070]: audit 2026-03-09T14:19:05.091926+0000 mon.a (mon.0) 346 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T14:19:06.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:06 vm05 bash[20070]: audit 2026-03-09T14:19:05.883173+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:06.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:06 vm05 bash[20070]: audit 2026-03-09T14:19:05.887886+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:06.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:06 vm05 bash[20070]: audit 2026-03-09T14:19:05.888570+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:19:06.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:06 vm05 bash[20070]: audit 2026-03-09T14:19:05.889029+0000 mon.a (mon.0) 350 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:19:06.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:06 vm05 bash[20070]: audit 2026-03-09T14:19:05.892657+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:06.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:06 vm03 bash[17524]: cluster 2026-03-09T14:19:04.025820+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T14:19:06.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:06 vm03 bash[17524]: cluster 2026-03-09T14:19:04.025879+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T14:19:06.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:06 vm03 bash[17524]: cluster 2026-03-09T14:19:04.918363+0000 mgr.x (mgr.14150) 111 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T14:19:06.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:06 vm03 bash[17524]: audit 2026-03-09T14:19:05.078210+0000 mon.a (mon.0) 343 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T14:19:06.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:06 vm03 bash[17524]: cluster 2026-03-09T14:19:05.091731+0000 mon.a (mon.0) 344 : cluster [INF] osd.1 [v2:192.168.123.103:6810/2015646488,v1:192.168.123.103:6811/2015646488] boot
2026-03-09T14:19:06.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:06 vm03 bash[17524]: cluster 2026-03-09T14:19:05.091731+0000 mon.a (mon.0)
344 : cluster [INF] osd.1 [v2:192.168.123.103:6810/2015646488,v1:192.168.123.103:6811/2015646488] boot 2026-03-09T14:19:06.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:06 vm03 bash[17524]: cluster 2026-03-09T14:19:05.091755+0000 mon.a (mon.0) 345 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-09T14:19:06.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:06 vm03 bash[17524]: cluster 2026-03-09T14:19:05.091755+0000 mon.a (mon.0) 345 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-09T14:19:06.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:06 vm03 bash[17524]: audit 2026-03-09T14:19:05.091926+0000 mon.a (mon.0) 346 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:19:06.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:06 vm03 bash[17524]: audit 2026-03-09T14:19:05.091926+0000 mon.a (mon.0) 346 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:19:06.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:06 vm03 bash[17524]: audit 2026-03-09T14:19:05.883173+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:06.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:06 vm03 bash[17524]: audit 2026-03-09T14:19:05.883173+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:06.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:06 vm03 bash[17524]: audit 2026-03-09T14:19:05.887886+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:06.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:06 vm03 bash[17524]: audit 2026-03-09T14:19:05.887886+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:06.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:06 vm03 bash[17524]: audit 2026-03-09T14:19:05.888570+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:19:06.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:06 vm03 bash[17524]: audit 2026-03-09T14:19:05.888570+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:19:06.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:06 vm03 bash[17524]: audit 2026-03-09T14:19:05.889029+0000 mon.a (mon.0) 350 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:19:06.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:06 vm03 bash[17524]: audit 2026-03-09T14:19:05.889029+0000 mon.a (mon.0) 350 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:19:06.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:06 vm03 bash[17524]: audit 2026-03-09T14:19:05.892657+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:06.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:06 vm03 bash[17524]: audit 2026-03-09T14:19:05.892657+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.14150 
2026-03-09T14:19:06.908 INFO:teuthology.orchestra.run.vm03.stdout:Created osd(s) 1 on host 'vm03'
2026-03-09T14:19:06.974 DEBUG:teuthology.orchestra.run.vm03:osd.1> sudo journalctl -f -n 0 -u ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.1.service
2026-03-09T14:19:06.975 INFO:tasks.cephadm:Deploying osd.2 on vm04 with /dev/vde...
2026-03-09T14:19:06.975 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- lvm zap /dev/vde
2026-03-09T14:19:07.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:07 vm04 bash[19581]: audit 2026-03-09T14:19:06.887741+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:19:07.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:07 vm04 bash[19581]: audit 2026-03-09T14:19:06.892516+0000 mon.a (mon.0) 353 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:07.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:07 vm04 bash[19581]: cluster 2026-03-09T14:19:06.896825+0000 mon.a (mon.0) 354 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in
2026-03-09T14:19:07.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:07 vm04 bash[19581]: audit 2026-03-09T14:19:06.900224+0000 mon.a (mon.0) 355 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:07.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:07 vm03 bash[17524]: audit 2026-03-09T14:19:06.887741+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:19:07.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:07 vm03 bash[17524]: audit 2026-03-09T14:19:06.892516+0000 mon.a (mon.0) 353 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:07.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:07 vm03 bash[17524]: cluster 2026-03-09T14:19:06.896825+0000 mon.a (mon.0) 354 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in
2026-03-09T14:19:07.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:07 vm03 bash[17524]: audit 2026-03-09T14:19:06.900224+0000 mon.a (mon.0) 355 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:07.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:07 vm05 bash[20070]: audit 2026-03-09T14:19:06.887741+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:19:07.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:07 vm05 bash[20070]: audit 2026-03-09T14:19:06.892516+0000 mon.a (mon.0) 353 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:07.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:07 vm05 bash[20070]: cluster 2026-03-09T14:19:06.896825+0000 mon.a (mon.0) 354 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in
2026-03-09T14:19:07.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:07 vm05 bash[20070]: audit 2026-03-09T14:19:06.900224+0000 mon.a (mon.0) 355 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:08.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:08 vm04 bash[19581]: cluster 2026-03-09T14:19:06.918597+0000 mgr.x (mgr.14150) 112 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T14:19:08.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:08 vm05 bash[20070]: cluster 2026-03-09T14:19:06.918597+0000 mgr.x (mgr.14150) 112 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T14:19:08.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:08 vm03 bash[17524]: cluster 2026-03-09T14:19:06.918597+0000 mgr.x (mgr.14150) 112 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T14:19:10.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:10 vm04 bash[19581]: cluster 2026-03-09T14:19:08.918819+0000 mgr.x (mgr.14150) 113 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T14:19:10.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:10 vm05 bash[20070]: cluster 2026-03-09T14:19:08.918819+0000 mgr.x (mgr.14150) 113 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T14:19:10.557 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:10 vm03 bash[17524]: cluster 2026-03-09T14:19:08.918819+0000 mgr.x (mgr.14150) 113 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T14:19:10.585 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.b/config
2026-03-09T14:19:11.403 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:19:11.413 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph orch daemon add osd vm04:/dev/vde
2026-03-09T14:19:12.430 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:12 vm03 bash[17524]: cluster 2026-03-09T14:19:10.919065+0000 mgr.x (mgr.14150) 114 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T14:19:12.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:12 vm04 bash[19581]: cluster 2026-03-09T14:19:10.919065+0000 mgr.x (mgr.14150) 114 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T14:19:12.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:12 vm05 bash[20070]: cluster 2026-03-09T14:19:10.919065+0000 mgr.x (mgr.14150) 114 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T14:19:13.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:13 vm04 bash[19581]: cephadm 2026-03-09T14:19:12.478695+0000 mgr.x (mgr.14150) 115 : cephadm [INF] Detected new or changed devices on vm03
2026-03-09T14:19:13.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:13 vm04 bash[19581]: audit 2026-03-09T14:19:12.483900+0000 mon.a (mon.0) 356 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:13.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:13 vm04 bash[19581]: audit 2026-03-09T14:19:12.488512+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:13.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:13 vm04 bash[19581]: audit 2026-03-09T14:19:12.489797+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:19:13.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:13 vm04 bash[19581]: audit 2026-03-09T14:19:12.490386+0000 mon.a (mon.0) 359 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:19:13.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:13 vm04 bash[19581]: cephadm 2026-03-09T14:19:12.490714+0000 mgr.x (mgr.14150) 116 : cephadm [INF] Adjusting osd_memory_target on vm03 to 227.8M
2026-03-09T14:19:13.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:13 vm04 bash[19581]: cephadm 2026-03-09T14:19:12.491102+0000 mgr.x (mgr.14150) 117 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 238960844: error parsing value: Value '238960844' is below minimum 939524096
2026-03-09T14:19:13.763 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:13 vm04 bash[19581]: audit 2026-03-09T14:19:12.491434+0000 mon.a (mon.0) 360 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:19:13.763 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:13 vm04 bash[19581]: audit 2026-03-09T14:19:12.491840+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:19:13.763 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:13 vm04 bash[19581]: audit 2026-03-09T14:19:12.496108+0000 mon.a (mon.0) 362 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:13.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:13 vm05 bash[20070]: cephadm 2026-03-09T14:19:12.478695+0000 mgr.x (mgr.14150) 115 : cephadm [INF] Detected new or changed devices on vm03
2026-03-09T14:19:13.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:13 vm05 bash[20070]: audit 2026-03-09T14:19:12.483900+0000 mon.a (mon.0) 356 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:13.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:13 vm05 bash[20070]: audit 2026-03-09T14:19:12.488512+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:13.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:13 vm05 bash[20070]: audit 2026-03-09T14:19:12.489797+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:19:13.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:13 vm05 bash[20070]: audit 2026-03-09T14:19:12.490386+0000 mon.a (mon.0) 359 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:19:13.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:13 vm05 bash[20070]: cephadm 2026-03-09T14:19:12.490714+0000 mgr.x (mgr.14150) 116 : cephadm [INF] Adjusting osd_memory_target on vm03 to 227.8M
2026-03-09T14:19:13.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:13 vm05 bash[20070]: cephadm 2026-03-09T14:19:12.491102+0000 mgr.x (mgr.14150) 117 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 238960844: error parsing value: Value '238960844' is below minimum 939524096
2026-03-09T14:19:13.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:13 vm05 bash[20070]: audit 2026-03-09T14:19:12.491434+0000 mon.a (mon.0) 360 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:19:13.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:13 vm05 bash[20070]: audit 2026-03-09T14:19:12.491840+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:19:13.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:13 vm05 bash[20070]: audit 2026-03-09T14:19:12.496108+0000 mon.a (mon.0) 362 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:13.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:13 vm03 bash[17524]: cephadm 2026-03-09T14:19:12.478695+0000 mgr.x (mgr.14150) 115 : cephadm [INF] Detected new or changed devices on vm03
2026-03-09T14:19:13.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:13 vm03 bash[17524]: audit 2026-03-09T14:19:12.483900+0000 mon.a (mon.0) 356 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:13.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:13 vm03 bash[17524]: audit 2026-03-09T14:19:12.488512+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:13.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:13 vm03 bash[17524]: audit 2026-03-09T14:19:12.489797+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:19:13.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:13 vm03 bash[17524]: audit 2026-03-09T14:19:12.490386+0000 mon.a (mon.0) 359 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:19:13.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:13 vm03 bash[17524]: cephadm 2026-03-09T14:19:12.490714+0000 mgr.x (mgr.14150) 116 : cephadm [INF] Adjusting osd_memory_target on vm03 to 227.8M
2026-03-09T14:19:13.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:13 vm03 bash[17524]: cephadm 2026-03-09T14:19:12.491102+0000 mgr.x (mgr.14150) 117 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 238960844: error parsing value: Value '238960844' is below minimum 939524096
2026-03-09T14:19:13.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:13 vm03 bash[17524]: audit 2026-03-09T14:19:12.491434+0000 mon.a (mon.0) 360 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:19:13.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:13 vm03 bash[17524]: audit 2026-03-09T14:19:12.491840+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:19:13.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:13 vm03 bash[17524]: audit 2026-03-09T14:19:12.496108+0000 mon.a (mon.0) 362 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:14.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:14 vm04 bash[19581]: cluster 2026-03-09T14:19:12.919326+0000 mgr.x (mgr.14150) 118 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T14:19:14.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:14 vm05 bash[20070]: cluster 2026-03-09T14:19:12.919326+0000 mgr.x (mgr.14150) 118 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T14:19:14.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:14 vm03 bash[17524]: cluster 2026-03-09T14:19:12.919326+0000 mgr.x (mgr.14150) 118 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T14:19:15.020 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.b/config
2026-03-09T14:19:15.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:15 vm04 bash[19581]: audit 2026-03-09T14:19:15.261715+0000 mon.a (mon.0) 363 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T14:19:15.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:15 vm04 bash[19581]: audit 2026-03-09T14:19:15.263014+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T14:19:15.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:15 vm04 bash[19581]: audit 2026-03-09T14:19:15.263460+0000 mon.a (mon.0) 365 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:19:15.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:15 vm05 bash[20070]: audit 2026-03-09T14:19:15.261715+0000 mon.a (mon.0) 363 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T14:19:15.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:15 vm05 bash[20070]: audit 2026-03-09T14:19:15.263014+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T14:19:15.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:15 vm05 bash[20070]: audit 2026-03-09T14:19:15.263460+0000 mon.a (mon.0) 365 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:19:15.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:15 vm03 bash[17524]: audit 2026-03-09T14:19:15.261715+0000 mon.a (mon.0) 363 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T14:19:15.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:15 vm03 bash[17524]: audit 2026-03-09T14:19:15.263014+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T14:19:15.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:15 vm03 bash[17524]: audit 2026-03-09T14:19:15.263460+0000 mon.a (mon.0) 365 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:19:16.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:16 vm04 bash[19581]: cluster 2026-03-09T14:19:14.919519+0000 mgr.x (mgr.14150) 119 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T14:19:16.762 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:16 vm04 bash[19581]: audit 2026-03-09T14:19:15.260374+0000 mgr.x (mgr.14150) 120 : audit [DBG] from='client.24158 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:19:16.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:16 vm05 bash[20070]: cluster 2026-03-09T14:19:14.919519+0000 mgr.x (mgr.14150) 119 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T14:19:16.763 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:16 vm05 bash[20070]: audit 2026-03-09T14:19:15.260374+0000 mgr.x (mgr.14150) 120 : audit [DBG] from='client.24158 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:19:16.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:16 vm03 bash[17524]: cluster 2026-03-09T14:19:14.919519+0000 mgr.x (mgr.14150) 119 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T14:19:16.807 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:16 vm03 bash[17524]: audit 2026-03-09T14:19:15.260374+0000 mgr.x (mgr.14150) 120 : audit [DBG] from='client.24158 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:19:18.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:17 vm04 bash[19581]: cluster 2026-03-09T14:19:16.919743+0000 mgr.x (mgr.14150) 121 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T14:19:18.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:17 vm05 bash[20070]: cluster 2026-03-09T14:19:16.919743+0000 mgr.x (mgr.14150) 121 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T14:19:18.057 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:17 vm03 bash[17524]: cluster 2026-03-09T14:19:16.919743+0000 mgr.x (mgr.14150) 121 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T14:19:20.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:19 vm04 bash[19581]: cluster 2026-03-09T14:19:18.919975+0000 mgr.x (mgr.14150) 122 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T14:19:20.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:19 vm04 bash[19581]: audit 2026-03-09T14:19:19.643103+0000 mon.b (mon.2) 6 : audit [INF] from='client.? 192.168.123.104:0/1126876108' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f76cddf6-4356-443b-8d69-5d0e6d8a3803"}]: dispatch
2026-03-09T14:19:20.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:19 vm04 bash[19581]: audit 2026-03-09T14:19:19.644627+0000 mon.a (mon.0) 366 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f76cddf6-4356-443b-8d69-5d0e6d8a3803"}]: dispatch
2026-03-09T14:19:20.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:19 vm04 bash[19581]: audit 2026-03-09T14:19:19.647457+0000 mon.a (mon.0) 367 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f76cddf6-4356-443b-8d69-5d0e6d8a3803"}]': finished
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f76cddf6-4356-443b-8d69-5d0e6d8a3803"}]': finished 2026-03-09T14:19:20.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:19 vm04 bash[19581]: cluster 2026-03-09T14:19:19.650270+0000 mon.a (mon.0) 368 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-09T14:19:20.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:19 vm04 bash[19581]: cluster 2026-03-09T14:19:19.650270+0000 mon.a (mon.0) 368 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-09T14:19:20.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:19 vm04 bash[19581]: audit 2026-03-09T14:19:19.650546+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:19:20.012 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:19 vm04 bash[19581]: audit 2026-03-09T14:19:19.650546+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:19:20.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:19 vm05 bash[20070]: cluster 2026-03-09T14:19:18.919975+0000 mgr.x (mgr.14150) 122 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:20.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:19 vm05 bash[20070]: cluster 2026-03-09T14:19:18.919975+0000 mgr.x (mgr.14150) 122 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:20.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:19 vm05 bash[20070]: audit 2026-03-09T14:19:19.643103+0000 mon.b (mon.2) 6 : audit [INF] from='client.? 192.168.123.104:0/1126876108' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f76cddf6-4356-443b-8d69-5d0e6d8a3803"}]: dispatch 2026-03-09T14:19:20.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:19 vm05 bash[20070]: audit 2026-03-09T14:19:19.643103+0000 mon.b (mon.2) 6 : audit [INF] from='client.? 192.168.123.104:0/1126876108' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f76cddf6-4356-443b-8d69-5d0e6d8a3803"}]: dispatch 2026-03-09T14:19:20.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:19 vm05 bash[20070]: audit 2026-03-09T14:19:19.644627+0000 mon.a (mon.0) 366 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f76cddf6-4356-443b-8d69-5d0e6d8a3803"}]: dispatch 2026-03-09T14:19:20.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:19 vm05 bash[20070]: audit 2026-03-09T14:19:19.644627+0000 mon.a (mon.0) 366 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f76cddf6-4356-443b-8d69-5d0e6d8a3803"}]: dispatch 2026-03-09T14:19:20.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:19 vm05 bash[20070]: audit 2026-03-09T14:19:19.647457+0000 mon.a (mon.0) 367 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f76cddf6-4356-443b-8d69-5d0e6d8a3803"}]': finished 2026-03-09T14:19:20.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:19 vm05 bash[20070]: audit 2026-03-09T14:19:19.647457+0000 mon.a (mon.0) 367 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f76cddf6-4356-443b-8d69-5d0e6d8a3803"}]': finished 2026-03-09T14:19:20.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:19 vm05 bash[20070]: cluster 2026-03-09T14:19:19.650270+0000 mon.a (mon.0) 368 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-09T14:19:20.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:19 vm05 bash[20070]: cluster 2026-03-09T14:19:19.650270+0000 mon.a (mon.0) 368 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-09T14:19:20.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:19 vm05 bash[20070]: audit 2026-03-09T14:19:19.650546+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:19:20.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:19 vm05 bash[20070]: audit 2026-03-09T14:19:19.650546+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:19:20.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:19 vm03 bash[17524]: cluster 2026-03-09T14:19:18.919975+0000 mgr.x (mgr.14150) 122 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:20.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:19 vm03 bash[17524]: cluster 2026-03-09T14:19:18.919975+0000 mgr.x (mgr.14150) 122 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:20.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:19 vm03 bash[17524]: audit 2026-03-09T14:19:19.643103+0000 mon.b (mon.2) 6 : audit [INF] from='client.? 192.168.123.104:0/1126876108' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f76cddf6-4356-443b-8d69-5d0e6d8a3803"}]: dispatch 2026-03-09T14:19:20.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:19 vm03 bash[17524]: audit 2026-03-09T14:19:19.643103+0000 mon.b (mon.2) 6 : audit [INF] from='client.? 192.168.123.104:0/1126876108' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f76cddf6-4356-443b-8d69-5d0e6d8a3803"}]: dispatch 2026-03-09T14:19:20.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:19 vm03 bash[17524]: audit 2026-03-09T14:19:19.644627+0000 mon.a (mon.0) 366 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f76cddf6-4356-443b-8d69-5d0e6d8a3803"}]: dispatch 2026-03-09T14:19:20.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:19 vm03 bash[17524]: audit 2026-03-09T14:19:19.644627+0000 mon.a (mon.0) 366 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f76cddf6-4356-443b-8d69-5d0e6d8a3803"}]: dispatch 2026-03-09T14:19:20.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:19 vm03 bash[17524]: audit 2026-03-09T14:19:19.647457+0000 mon.a (mon.0) 367 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f76cddf6-4356-443b-8d69-5d0e6d8a3803"}]': finished 2026-03-09T14:19:20.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:19 vm03 bash[17524]: audit 2026-03-09T14:19:19.647457+0000 mon.a (mon.0) 367 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f76cddf6-4356-443b-8d69-5d0e6d8a3803"}]': finished 2026-03-09T14:19:20.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:19 vm03 bash[17524]: cluster 2026-03-09T14:19:19.650270+0000 mon.a (mon.0) 368 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-09T14:19:20.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:19 vm03 bash[17524]: cluster 2026-03-09T14:19:19.650270+0000 mon.a (mon.0) 368 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-09T14:19:20.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:19 vm03 bash[17524]: audit 2026-03-09T14:19:19.650546+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:19:20.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:19 vm03 bash[17524]: audit 2026-03-09T14:19:19.650546+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:19:21.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:20 vm04 bash[19581]: audit 2026-03-09T14:19:20.223345+0000 mon.b (mon.2) 7 : audit [DBG] from='client.? 192.168.123.104:0/2470857497' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:19:21.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:20 vm04 bash[19581]: audit 2026-03-09T14:19:20.223345+0000 mon.b (mon.2) 7 : audit [DBG] from='client.? 192.168.123.104:0/2470857497' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:19:21.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:20 vm05 bash[20070]: audit 2026-03-09T14:19:20.223345+0000 mon.b (mon.2) 7 : audit [DBG] from='client.? 192.168.123.104:0/2470857497' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:19:21.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:20 vm05 bash[20070]: audit 2026-03-09T14:19:20.223345+0000 mon.b (mon.2) 7 : audit [DBG] from='client.? 192.168.123.104:0/2470857497' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:19:21.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:20 vm03 bash[17524]: audit 2026-03-09T14:19:20.223345+0000 mon.b (mon.2) 7 : audit [DBG] from='client.? 192.168.123.104:0/2470857497' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:19:21.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:20 vm03 bash[17524]: audit 2026-03-09T14:19:20.223345+0000 mon.b (mon.2) 7 : audit [DBG] from='client.? 
192.168.123.104:0/2470857497' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:19:22.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:21 vm04 bash[19581]: cluster 2026-03-09T14:19:20.920231+0000 mgr.x (mgr.14150) 123 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:22.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:21 vm04 bash[19581]: cluster 2026-03-09T14:19:20.920231+0000 mgr.x (mgr.14150) 123 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:22.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:21 vm05 bash[20070]: cluster 2026-03-09T14:19:20.920231+0000 mgr.x (mgr.14150) 123 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:22.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:21 vm05 bash[20070]: cluster 2026-03-09T14:19:20.920231+0000 mgr.x (mgr.14150) 123 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:22.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:21 vm03 bash[17524]: cluster 2026-03-09T14:19:20.920231+0000 mgr.x (mgr.14150) 123 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:22.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:21 vm03 bash[17524]: cluster 2026-03-09T14:19:20.920231+0000 mgr.x (mgr.14150) 123 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:24.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:23 vm04 bash[19581]: cluster 2026-03-09T14:19:22.920486+0000 mgr.x (mgr.14150) 124 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:24.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:23 vm04 bash[19581]: cluster 2026-03-09T14:19:22.920486+0000 mgr.x (mgr.14150) 124 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:24.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:23 vm05 bash[20070]: cluster 2026-03-09T14:19:22.920486+0000 mgr.x (mgr.14150) 124 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:24.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:23 vm05 bash[20070]: cluster 2026-03-09T14:19:22.920486+0000 mgr.x (mgr.14150) 124 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:24.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:23 vm03 bash[17524]: cluster 2026-03-09T14:19:22.920486+0000 mgr.x (mgr.14150) 124 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:24.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:23 vm03 bash[17524]: cluster 2026-03-09T14:19:22.920486+0000 mgr.x (mgr.14150) 124 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:26.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:25 vm04 bash[19581]: cluster 2026-03-09T14:19:24.920698+0000 mgr.x (mgr.14150) 125 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:26.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:25 vm04 bash[19581]: cluster 2026-03-09T14:19:24.920698+0000 mgr.x (mgr.14150) 125 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:26.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:25 vm05 
bash[20070]: cluster 2026-03-09T14:19:24.920698+0000 mgr.x (mgr.14150) 125 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:26.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:25 vm05 bash[20070]: cluster 2026-03-09T14:19:24.920698+0000 mgr.x (mgr.14150) 125 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:26.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:25 vm03 bash[17524]: cluster 2026-03-09T14:19:24.920698+0000 mgr.x (mgr.14150) 125 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:26.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:25 vm03 bash[17524]: cluster 2026-03-09T14:19:24.920698+0000 mgr.x (mgr.14150) 125 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:28.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:27 vm04 bash[19581]: cluster 2026-03-09T14:19:26.920916+0000 mgr.x (mgr.14150) 126 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:28.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:27 vm04 bash[19581]: cluster 2026-03-09T14:19:26.920916+0000 mgr.x (mgr.14150) 126 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:28.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:27 vm05 bash[20070]: cluster 2026-03-09T14:19:26.920916+0000 mgr.x (mgr.14150) 126 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:28.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:27 vm05 bash[20070]: cluster 2026-03-09T14:19:26.920916+0000 mgr.x (mgr.14150) 126 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:28.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:28 vm03 bash[17524]: cluster 2026-03-09T14:19:26.920916+0000 mgr.x (mgr.14150) 126 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:28.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:28 vm03 bash[17524]: cluster 2026-03-09T14:19:26.920916+0000 mgr.x (mgr.14150) 126 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:29.121 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:29 vm04 bash[19581]: audit 2026-03-09T14:19:28.369908+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T14:19:29.121 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:29 vm04 bash[19581]: audit 2026-03-09T14:19:28.369908+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T14:19:29.121 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:29 vm04 bash[19581]: audit 2026-03-09T14:19:28.370424+0000 mon.a (mon.0) 371 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:19:29.121 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:29 vm04 bash[19581]: audit 2026-03-09T14:19:28.370424+0000 mon.a (mon.0) 371 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:19:29.121 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:29 vm04 
bash[19581]: cephadm 2026-03-09T14:19:28.370818+0000 mgr.x (mgr.14150) 127 : cephadm [INF] Deploying daemon osd.2 on vm04 2026-03-09T14:19:29.121 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:29 vm04 bash[19581]: cephadm 2026-03-09T14:19:28.370818+0000 mgr.x (mgr.14150) 127 : cephadm [INF] Deploying daemon osd.2 on vm04 2026-03-09T14:19:29.121 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:29 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:19:29.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:29 vm05 bash[20070]: audit 2026-03-09T14:19:28.369908+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T14:19:29.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:29 vm05 bash[20070]: audit 2026-03-09T14:19:28.369908+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T14:19:29.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:29 vm05 bash[20070]: audit 2026-03-09T14:19:28.370424+0000 mon.a (mon.0) 371 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:19:29.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:29 vm05 bash[20070]: audit 2026-03-09T14:19:28.370424+0000 mon.a (mon.0) 371 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:19:29.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:29 vm05 bash[20070]: cephadm 2026-03-09T14:19:28.370818+0000 mgr.x (mgr.14150) 127 : cephadm [INF] Deploying daemon osd.2 on vm04 2026-03-09T14:19:29.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:29 vm05 bash[20070]: cephadm 2026-03-09T14:19:28.370818+0000 mgr.x (mgr.14150) 127 : cephadm [INF] Deploying daemon osd.2 on vm04 2026-03-09T14:19:29.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:29 vm03 bash[17524]: audit 2026-03-09T14:19:28.369908+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T14:19:29.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:29 vm03 bash[17524]: audit 2026-03-09T14:19:28.369908+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T14:19:29.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:29 vm03 bash[17524]: audit 2026-03-09T14:19:28.370424+0000 mon.a (mon.0) 371 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:19:29.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:29 vm03 bash[17524]: audit 2026-03-09T14:19:28.370424+0000 mon.a (mon.0) 371 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:19:29.307 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:29 vm03 bash[17524]: cephadm 2026-03-09T14:19:28.370818+0000 mgr.x (mgr.14150) 127 : cephadm [INF] Deploying daemon osd.2 on vm04 2026-03-09T14:19:29.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:29 vm03 bash[17524]: cephadm 2026-03-09T14:19:28.370818+0000 mgr.x (mgr.14150) 127 : cephadm [INF] Deploying daemon osd.2 on vm04 2026-03-09T14:19:29.399 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:29 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:19:30.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:30 vm05 bash[20070]: cluster 2026-03-09T14:19:28.921165+0000 mgr.x (mgr.14150) 128 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:30.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:30 vm05 bash[20070]: cluster 2026-03-09T14:19:28.921165+0000 mgr.x (mgr.14150) 128 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:30.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:30 vm05 bash[20070]: audit 2026-03-09T14:19:29.325761+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:19:30.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:30 vm05 bash[20070]: audit 2026-03-09T14:19:29.325761+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:19:30.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:30 vm05 bash[20070]: audit 2026-03-09T14:19:29.330206+0000 mon.a (mon.0) 373 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:30.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:30 vm05 bash[20070]: audit 2026-03-09T14:19:29.330206+0000 mon.a (mon.0) 373 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:30.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:30 vm05 bash[20070]: audit 2026-03-09T14:19:29.334086+0000 mon.a (mon.0) 374 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:30.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:30 vm05 bash[20070]: audit 2026-03-09T14:19:29.334086+0000 mon.a (mon.0) 374 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:30.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:30 vm03 bash[17524]: cluster 2026-03-09T14:19:28.921165+0000 mgr.x (mgr.14150) 128 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:30.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:30 vm03 bash[17524]: cluster 2026-03-09T14:19:28.921165+0000 mgr.x (mgr.14150) 128 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:30.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:30 vm03 bash[17524]: audit 2026-03-09T14:19:29.325761+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", 
"format": "json"}]: dispatch 2026-03-09T14:19:30.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:30 vm03 bash[17524]: audit 2026-03-09T14:19:29.325761+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:19:30.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:30 vm03 bash[17524]: audit 2026-03-09T14:19:29.330206+0000 mon.a (mon.0) 373 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:30.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:30 vm03 bash[17524]: audit 2026-03-09T14:19:29.330206+0000 mon.a (mon.0) 373 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:30.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:30 vm03 bash[17524]: audit 2026-03-09T14:19:29.334086+0000 mon.a (mon.0) 374 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:30.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:30 vm03 bash[17524]: audit 2026-03-09T14:19:29.334086+0000 mon.a (mon.0) 374 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:30.429 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:30 vm04 bash[19581]: cluster 2026-03-09T14:19:28.921165+0000 mgr.x (mgr.14150) 128 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:30.429 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:30 vm04 bash[19581]: cluster 2026-03-09T14:19:28.921165+0000 mgr.x (mgr.14150) 128 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:30.429 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:30 vm04 bash[19581]: audit 2026-03-09T14:19:29.325761+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:19:30.429 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:30 vm04 bash[19581]: audit 2026-03-09T14:19:29.325761+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:19:30.429 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:30 vm04 bash[19581]: audit 2026-03-09T14:19:29.330206+0000 mon.a (mon.0) 373 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:30.429 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:30 vm04 bash[19581]: audit 2026-03-09T14:19:29.330206+0000 mon.a (mon.0) 373 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:30.429 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:30 vm04 bash[19581]: audit 2026-03-09T14:19:29.334086+0000 mon.a (mon.0) 374 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:30.429 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:30 vm04 bash[19581]: audit 2026-03-09T14:19:29.334086+0000 mon.a (mon.0) 374 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:32.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:32 vm04 bash[19581]: cluster 2026-03-09T14:19:30.921403+0000 mgr.x (mgr.14150) 129 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:32.262 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:32 vm04 bash[19581]: cluster 
2026-03-09T14:19:30.921403+0000 mgr.x (mgr.14150) 129 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:32.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:32 vm03 bash[17524]: cluster 2026-03-09T14:19:30.921403+0000 mgr.x (mgr.14150) 129 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:32.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:32 vm03 bash[17524]: cluster 2026-03-09T14:19:30.921403+0000 mgr.x (mgr.14150) 129 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:32.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:32 vm05 bash[20070]: cluster 2026-03-09T14:19:30.921403+0000 mgr.x (mgr.14150) 129 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:32.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:32 vm05 bash[20070]: cluster 2026-03-09T14:19:30.921403+0000 mgr.x (mgr.14150) 129 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:34.306 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:34 vm03 bash[17524]: cluster 2026-03-09T14:19:32.921625+0000 mgr.x (mgr.14150) 130 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:34.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:34 vm03 bash[17524]: cluster 2026-03-09T14:19:32.921625+0000 mgr.x (mgr.14150) 130 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:34.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:34 vm03 bash[17524]: audit 2026-03-09T14:19:33.066913+0000 mon.b (mon.2) 8 : audit [INF] from='osd.2 [v2:192.168.123.104:6800/1899064825,v1:192.168.123.104:6801/1899064825]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:19:34.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:34 vm03 bash[17524]: audit 2026-03-09T14:19:33.066913+0000 mon.b (mon.2) 8 : audit [INF] from='osd.2 [v2:192.168.123.104:6800/1899064825,v1:192.168.123.104:6801/1899064825]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:19:34.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:34 vm03 bash[17524]: audit 2026-03-09T14:19:33.068271+0000 mon.a (mon.0) 375 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:19:34.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:34 vm03 bash[17524]: audit 2026-03-09T14:19:33.068271+0000 mon.a (mon.0) 375 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:19:34.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:34 vm04 bash[19581]: cluster 2026-03-09T14:19:32.921625+0000 mgr.x (mgr.14150) 130 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:34.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:34 vm04 bash[19581]: cluster 2026-03-09T14:19:32.921625+0000 mgr.x (mgr.14150) 130 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:34.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:34 vm04 bash[19581]: audit 2026-03-09T14:19:33.066913+0000 mon.b (mon.2) 8 : audit [INF] from='osd.2 
[v2:192.168.123.104:6800/1899064825,v1:192.168.123.104:6801/1899064825]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:19:34.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:34 vm04 bash[19581]: audit 2026-03-09T14:19:33.066913+0000 mon.b (mon.2) 8 : audit [INF] from='osd.2 [v2:192.168.123.104:6800/1899064825,v1:192.168.123.104:6801/1899064825]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:19:34.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:34 vm04 bash[19581]: audit 2026-03-09T14:19:33.068271+0000 mon.a (mon.0) 375 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:19:34.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:34 vm04 bash[19581]: audit 2026-03-09T14:19:33.068271+0000 mon.a (mon.0) 375 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:19:34.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:34 vm05 bash[20070]: cluster 2026-03-09T14:19:32.921625+0000 mgr.x (mgr.14150) 130 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:34.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:34 vm05 bash[20070]: cluster 2026-03-09T14:19:32.921625+0000 mgr.x (mgr.14150) 130 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:34.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:34 vm05 bash[20070]: audit 2026-03-09T14:19:33.066913+0000 mon.b (mon.2) 8 : audit [INF] from='osd.2 [v2:192.168.123.104:6800/1899064825,v1:192.168.123.104:6801/1899064825]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:19:34.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:34 vm05 bash[20070]: audit 2026-03-09T14:19:33.066913+0000 mon.b (mon.2) 8 : audit [INF] from='osd.2 [v2:192.168.123.104:6800/1899064825,v1:192.168.123.104:6801/1899064825]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:19:34.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:34 vm05 bash[20070]: audit 2026-03-09T14:19:33.068271+0000 mon.a (mon.0) 375 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:19:34.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:34 vm05 bash[20070]: audit 2026-03-09T14:19:33.068271+0000 mon.a (mon.0) 375 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:19:35.306 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:35 vm03 bash[17524]: audit 2026-03-09T14:19:34.027080+0000 mon.a (mon.0) 376 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T14:19:35.306 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:35 vm03 bash[17524]: audit 2026-03-09T14:19:34.027080+0000 mon.a (mon.0) 376 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T14:19:35.306 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:35 vm03 bash[17524]: audit 2026-03-09T14:19:34.028815+0000 mon.b 
(mon.2) 9 : audit [INF] from='osd.2 [v2:192.168.123.104:6800/1899064825,v1:192.168.123.104:6801/1899064825]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:19:35.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:35 vm03 bash[17524]: audit 2026-03-09T14:19:34.028815+0000 mon.b (mon.2) 9 : audit [INF] from='osd.2 [v2:192.168.123.104:6800/1899064825,v1:192.168.123.104:6801/1899064825]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:19:35.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:35 vm03 bash[17524]: cluster 2026-03-09T14:19:34.029234+0000 mon.a (mon.0) 377 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T14:19:35.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:35 vm03 bash[17524]: cluster 2026-03-09T14:19:34.029234+0000 mon.a (mon.0) 377 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T14:19:35.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:35 vm03 bash[17524]: audit 2026-03-09T14:19:34.029399+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:19:35.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:35 vm03 bash[17524]: audit 2026-03-09T14:19:34.029399+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:19:35.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:35 vm03 bash[17524]: audit 2026-03-09T14:19:34.030273+0000 mon.a (mon.0) 379 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:19:35.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:35 vm03 bash[17524]: audit 2026-03-09T14:19:34.030273+0000 mon.a (mon.0) 379 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:19:35.311 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:35 vm04 bash[19581]: audit 2026-03-09T14:19:34.027080+0000 mon.a (mon.0) 376 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T14:19:35.311 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:35 vm04 bash[19581]: audit 2026-03-09T14:19:34.027080+0000 mon.a (mon.0) 376 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T14:19:35.311 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:35 vm04 bash[19581]: audit 2026-03-09T14:19:34.028815+0000 mon.b (mon.2) 9 : audit [INF] from='osd.2 [v2:192.168.123.104:6800/1899064825,v1:192.168.123.104:6801/1899064825]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:19:35.311 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:35 vm04 bash[19581]: audit 2026-03-09T14:19:34.028815+0000 mon.b (mon.2) 9 : audit [INF] from='osd.2 [v2:192.168.123.104:6800/1899064825,v1:192.168.123.104:6801/1899064825]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", 
"root=default"]}]: dispatch 2026-03-09T14:19:35.311 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:35 vm04 bash[19581]: cluster 2026-03-09T14:19:34.029234+0000 mon.a (mon.0) 377 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T14:19:35.311 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:35 vm04 bash[19581]: cluster 2026-03-09T14:19:34.029234+0000 mon.a (mon.0) 377 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T14:19:35.311 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:35 vm04 bash[19581]: audit 2026-03-09T14:19:34.029399+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:19:35.311 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:35 vm04 bash[19581]: audit 2026-03-09T14:19:34.029399+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:19:35.311 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:35 vm04 bash[19581]: audit 2026-03-09T14:19:34.030273+0000 mon.a (mon.0) 379 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:19:35.311 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:35 vm04 bash[19581]: audit 2026-03-09T14:19:34.030273+0000 mon.a (mon.0) 379 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:19:35.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:35 vm05 bash[20070]: audit 2026-03-09T14:19:34.027080+0000 mon.a (mon.0) 376 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T14:19:35.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:35 vm05 bash[20070]: audit 2026-03-09T14:19:34.027080+0000 mon.a (mon.0) 376 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T14:19:35.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:35 vm05 bash[20070]: audit 2026-03-09T14:19:34.028815+0000 mon.b (mon.2) 9 : audit [INF] from='osd.2 [v2:192.168.123.104:6800/1899064825,v1:192.168.123.104:6801/1899064825]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:19:35.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:35 vm05 bash[20070]: audit 2026-03-09T14:19:34.028815+0000 mon.b (mon.2) 9 : audit [INF] from='osd.2 [v2:192.168.123.104:6800/1899064825,v1:192.168.123.104:6801/1899064825]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:19:35.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:35 vm05 bash[20070]: cluster 2026-03-09T14:19:34.029234+0000 mon.a (mon.0) 377 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T14:19:35.514 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:35 vm05 bash[20070]: cluster 2026-03-09T14:19:34.029234+0000 mon.a (mon.0) 377 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T14:19:35.514 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:35 vm05 bash[20070]: audit 2026-03-09T14:19:34.029399+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14150 
192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:19:35.514 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:35 vm05 bash[20070]: audit 2026-03-09T14:19:34.029399+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:19:35.514 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:35 vm05 bash[20070]: audit 2026-03-09T14:19:34.030273+0000 mon.a (mon.0) 379 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:19:35.514 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:35 vm05 bash[20070]: audit 2026-03-09T14:19:34.030273+0000 mon.a (mon.0) 379 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:19:36.306 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:36 vm03 bash[17524]: cluster 2026-03-09T14:19:34.921835+0000 mgr.x (mgr.14150) 131 : cluster [DBG] pgmap v84: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:36.306 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:36 vm03 bash[17524]: cluster 2026-03-09T14:19:34.921835+0000 mgr.x (mgr.14150) 131 : cluster [DBG] pgmap v84: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:36.306 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:36 vm03 bash[17524]: audit 2026-03-09T14:19:35.030269+0000 mon.a (mon.0) 380 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T14:19:36.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:36 vm03 bash[17524]: audit 2026-03-09T14:19:35.030269+0000 mon.a (mon.0) 380 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T14:19:36.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:36 vm03 bash[17524]: cluster 2026-03-09T14:19:35.032493+0000 mon.a (mon.0) 381 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T14:19:36.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:36 vm03 bash[17524]: cluster 2026-03-09T14:19:35.032493+0000 mon.a (mon.0) 381 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T14:19:36.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:36 vm03 bash[17524]: audit 2026-03-09T14:19:35.033197+0000 mon.a (mon.0) 382 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:19:36.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:36 vm03 bash[17524]: audit 2026-03-09T14:19:35.033197+0000 mon.a (mon.0) 382 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:19:36.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:36 vm03 bash[17524]: audit 2026-03-09T14:19:35.043980+0000 mon.a (mon.0) 383 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:19:36.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:36 vm03 bash[17524]: audit 2026-03-09T14:19:35.043980+0000 mon.a (mon.0) 383 : audit [DBG] from='mgr.14150 
192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:19:36.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:36 vm03 bash[17524]: audit 2026-03-09T14:19:35.492619+0000 mon.a (mon.0) 384 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:36.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:36 vm03 bash[17524]: audit 2026-03-09T14:19:35.492619+0000 mon.a (mon.0) 384 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:36.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:36 vm03 bash[17524]: audit 2026-03-09T14:19:35.499510+0000 mon.a (mon.0) 385 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:36.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:36 vm03 bash[17524]: audit 2026-03-09T14:19:35.499510+0000 mon.a (mon.0) 385 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:36.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:36 vm03 bash[17524]: audit 2026-03-09T14:19:35.868983+0000 mon.a (mon.0) 386 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:19:36.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:36 vm03 bash[17524]: audit 2026-03-09T14:19:35.868983+0000 mon.a (mon.0) 386 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:19:36.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:36 vm03 bash[17524]: audit 2026-03-09T14:19:35.869865+0000 mon.a (mon.0) 387 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:19:36.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:36 vm03 bash[17524]: audit 2026-03-09T14:19:35.869865+0000 mon.a (mon.0) 387 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:19:36.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:36 vm03 bash[17524]: audit 2026-03-09T14:19:35.875119+0000 mon.a (mon.0) 388 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:36.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:36 vm03 bash[17524]: audit 2026-03-09T14:19:35.875119+0000 mon.a (mon.0) 388 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:36.436 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:36 vm04 bash[19581]: cluster 2026-03-09T14:19:34.921835+0000 mgr.x (mgr.14150) 131 : cluster [DBG] pgmap v84: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:36.436 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:36 vm04 bash[19581]: cluster 2026-03-09T14:19:34.921835+0000 mgr.x (mgr.14150) 131 : cluster [DBG] pgmap v84: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:36.436 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:36 vm04 bash[19581]: audit 2026-03-09T14:19:35.030269+0000 mon.a (mon.0) 380 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T14:19:36.436 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:36 vm04 bash[19581]: audit 
2026-03-09T14:19:35.030269+0000 mon.a (mon.0) 380 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T14:19:36.436 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:36 vm04 bash[19581]: cluster 2026-03-09T14:19:35.032493+0000 mon.a (mon.0) 381 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T14:19:36.436 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:36 vm04 bash[19581]: cluster 2026-03-09T14:19:35.032493+0000 mon.a (mon.0) 381 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T14:19:36.436 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:36 vm04 bash[19581]: audit 2026-03-09T14:19:35.033197+0000 mon.a (mon.0) 382 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:19:36.436 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:36 vm04 bash[19581]: audit 2026-03-09T14:19:35.033197+0000 mon.a (mon.0) 382 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:19:36.436 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:36 vm04 bash[19581]: audit 2026-03-09T14:19:35.043980+0000 mon.a (mon.0) 383 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:19:36.436 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:36 vm04 bash[19581]: audit 2026-03-09T14:19:35.043980+0000 mon.a (mon.0) 383 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:19:36.436 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:36 vm04 bash[19581]: audit 2026-03-09T14:19:35.492619+0000 mon.a (mon.0) 384 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:36.436 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:36 vm04 bash[19581]: audit 2026-03-09T14:19:35.492619+0000 mon.a (mon.0) 384 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:36.436 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:36 vm04 bash[19581]: audit 2026-03-09T14:19:35.499510+0000 mon.a (mon.0) 385 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:36.436 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:36 vm04 bash[19581]: audit 2026-03-09T14:19:35.499510+0000 mon.a (mon.0) 385 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:36.436 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:36 vm04 bash[19581]: audit 2026-03-09T14:19:35.868983+0000 mon.a (mon.0) 386 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:19:36.436 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:36 vm04 bash[19581]: audit 2026-03-09T14:19:35.868983+0000 mon.a (mon.0) 386 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:19:36.436 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:36 vm04 bash[19581]: audit 2026-03-09T14:19:35.869865+0000 mon.a (mon.0) 387 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:19:36.436 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:36 vm04 bash[19581]: audit 2026-03-09T14:19:35.869865+0000 mon.a (mon.0) 387 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:19:36.436 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:36 vm04 bash[19581]: audit 2026-03-09T14:19:35.875119+0000 mon.a (mon.0) 388 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:36.436 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:36 vm04 bash[19581]: audit 2026-03-09T14:19:35.875119+0000 mon.a (mon.0) 388 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:36.490 INFO:teuthology.orchestra.run.vm04.stdout:Created osd(s) 2 on host 'vm04' 2026-03-09T14:19:36.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:36 vm05 bash[20070]: cluster 2026-03-09T14:19:34.921835+0000 mgr.x (mgr.14150) 131 : cluster [DBG] pgmap v84: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:36.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:36 vm05 bash[20070]: cluster 2026-03-09T14:19:34.921835+0000 mgr.x (mgr.14150) 131 : cluster [DBG] pgmap v84: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:19:36.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:36 vm05 bash[20070]: audit 2026-03-09T14:19:35.030269+0000 mon.a (mon.0) 380 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T14:19:36.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:36 vm05 bash[20070]: audit 2026-03-09T14:19:35.030269+0000 mon.a (mon.0) 380 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T14:19:36.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:36 vm05 bash[20070]: cluster 2026-03-09T14:19:35.032493+0000 mon.a (mon.0) 381 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T14:19:36.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:36 vm05 bash[20070]: cluster 2026-03-09T14:19:35.032493+0000 mon.a (mon.0) 381 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T14:19:36.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:36 vm05 bash[20070]: audit 2026-03-09T14:19:35.033197+0000 mon.a (mon.0) 382 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:19:36.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:36 vm05 bash[20070]: audit 2026-03-09T14:19:35.033197+0000 mon.a (mon.0) 382 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:19:36.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:36 vm05 bash[20070]: audit 2026-03-09T14:19:35.043980+0000 mon.a (mon.0) 383 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:19:36.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:36 vm05 bash[20070]: audit 2026-03-09T14:19:35.043980+0000 mon.a (mon.0) 383 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:19:36.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:36 vm05 
bash[20070]: audit 2026-03-09T14:19:35.492619+0000 mon.a (mon.0) 384 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:36.514 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:36 vm05 bash[20070]: audit 2026-03-09T14:19:35.492619+0000 mon.a (mon.0) 384 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:36.514 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:36 vm05 bash[20070]: audit 2026-03-09T14:19:35.499510+0000 mon.a (mon.0) 385 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:36.514 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:36 vm05 bash[20070]: audit 2026-03-09T14:19:35.499510+0000 mon.a (mon.0) 385 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:36.514 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:36 vm05 bash[20070]: audit 2026-03-09T14:19:35.868983+0000 mon.a (mon.0) 386 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:19:36.514 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:36 vm05 bash[20070]: audit 2026-03-09T14:19:35.868983+0000 mon.a (mon.0) 386 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:19:36.514 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:36 vm05 bash[20070]: audit 2026-03-09T14:19:35.869865+0000 mon.a (mon.0) 387 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:19:36.514 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:36 vm05 bash[20070]: audit 2026-03-09T14:19:35.869865+0000 mon.a (mon.0) 387 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:19:36.514 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:36 vm05 bash[20070]: audit 2026-03-09T14:19:35.875119+0000 mon.a (mon.0) 388 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:36.514 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:36 vm05 bash[20070]: audit 2026-03-09T14:19:35.875119+0000 mon.a (mon.0) 388 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:36.573 DEBUG:teuthology.orchestra.run.vm04:osd.2> sudo journalctl -f -n 0 -u ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.2.service 2026-03-09T14:19:36.574 INFO:tasks.cephadm:Deploying osd.3 on vm04 with /dev/vdd... 
2026-03-09T14:19:36.574 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- lvm zap /dev/vdd
2026-03-09T14:19:37.306 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:37 vm03 bash[17524]: cluster 2026-03-09T14:19:34.043283+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T14:19:37.306 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:37 vm03 bash[17524]: cluster 2026-03-09T14:19:34.043329+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T14:19:37.306 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:37 vm03 bash[17524]: audit 2026-03-09T14:19:36.036759+0000 mon.a (mon.0) 389 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T14:19:37.306 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:37 vm03 bash[17524]: cluster 2026-03-09T14:19:36.039717+0000 mon.a (mon.0) 390 : cluster [INF] osd.2 [v2:192.168.123.104:6800/1899064825,v1:192.168.123.104:6801/1899064825] boot
2026-03-09T14:19:37.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:37 vm03 bash[17524]: cluster 2026-03-09T14:19:36.039754+0000 mon.a (mon.0) 391 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in
2026-03-09T14:19:37.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:37 vm03 bash[17524]: audit 2026-03-09T14:19:36.041085+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T14:19:37.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:37 vm03 bash[17524]: audit 2026-03-09T14:19:36.475679+0000 mon.a (mon.0) 393 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:19:37.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:37 vm03 bash[17524]: audit 2026-03-09T14:19:36.481361+0000 mon.a (mon.0) 394 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:37.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:37 vm03 bash[17524]: audit 2026-03-09T14:19:36.485281+0000 mon.a (mon.0) 395 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:37.307 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:37 vm03 bash[17524]: audit 2026-03-09T14:19:36.970559+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-09T14:19:37.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:37 vm04 bash[19581]: cluster 2026-03-09T14:19:34.043283+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T14:19:37.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:37 vm04 bash[19581]: cluster 2026-03-09T14:19:34.043329+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T14:19:37.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:37 vm04 bash[19581]: audit 2026-03-09T14:19:36.036759+0000 mon.a (mon.0) 389 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T14:19:37.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:37 vm04 bash[19581]: cluster 2026-03-09T14:19:36.039717+0000 mon.a (mon.0) 390 : cluster [INF] osd.2 [v2:192.168.123.104:6800/1899064825,v1:192.168.123.104:6801/1899064825] boot
2026-03-09T14:19:37.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:37 vm04 bash[19581]: cluster 2026-03-09T14:19:36.039754+0000 mon.a (mon.0) 391 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in
2026-03-09T14:19:37.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:37 vm04 bash[19581]: audit 2026-03-09T14:19:36.041085+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T14:19:37.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:37 vm04 bash[19581]: audit 2026-03-09T14:19:36.475679+0000 mon.a (mon.0) 393 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:19:37.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:37 vm04 bash[19581]: audit 2026-03-09T14:19:36.481361+0000 mon.a (mon.0) 394 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:37.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:37 vm04 bash[19581]: audit 2026-03-09T14:19:36.485281+0000 mon.a (mon.0) 395 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:37.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:37 vm04 bash[19581]: audit 2026-03-09T14:19:36.970559+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-09T14:19:37.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:37 vm05 bash[20070]: cluster 2026-03-09T14:19:34.043283+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T14:19:37.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:37 vm05 bash[20070]: cluster 2026-03-09T14:19:34.043329+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T14:19:37.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:37 vm05 bash[20070]: audit 2026-03-09T14:19:36.036759+0000 mon.a (mon.0) 389 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T14:19:37.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:37 vm05 bash[20070]: cluster 2026-03-09T14:19:36.039717+0000 mon.a (mon.0) 390 : cluster [INF] osd.2 [v2:192.168.123.104:6800/1899064825,v1:192.168.123.104:6801/1899064825] boot
2026-03-09T14:19:37.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:37 vm05 bash[20070]: cluster 2026-03-09T14:19:36.039754+0000 mon.a (mon.0) 391 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in
2026-03-09T14:19:37.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:37 vm05 bash[20070]: audit 2026-03-09T14:19:36.041085+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-09T14:19:37.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:37 vm05 bash[20070]: audit 2026-03-09T14:19:36.475679+0000 mon.a (mon.0) 393 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:19:37.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:37 vm05 bash[20070]: audit 2026-03-09T14:19:36.481361+0000 mon.a (mon.0) 394 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:37.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:37 vm05 bash[20070]: audit 2026-03-09T14:19:36.485281+0000 mon.a (mon.0) 395 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:37.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:37 vm05 bash[20070]: audit 2026-03-09T14:19:36.970559+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-09T14:19:38.511 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:38 vm04 bash[19581]: cluster 2026-03-09T14:19:36.922103+0000 mgr.x (mgr.14150) 132 : cluster [DBG] pgmap v87: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T14:19:38.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:38 vm04 bash[19581]: audit 2026-03-09T14:19:37.053539+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
2026-03-09T14:19:38.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:38 vm04 bash[19581]: cluster 2026-03-09T14:19:37.055632+0000 mon.a (mon.0) 398 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in
2026-03-09T14:19:38.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:38 vm04 bash[19581]: audit 2026-03-09T14:19:37.057164+0000 mon.a (mon.0) 399 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
2026-03-09T14:19:38.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:38 vm05 bash[20070]: cluster 2026-03-09T14:19:36.922103+0000 mgr.x (mgr.14150) 132 : cluster [DBG] pgmap v87: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T14:19:38.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:38 vm05 bash[20070]: audit 2026-03-09T14:19:37.053539+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
2026-03-09T14:19:38.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:38 vm05 bash[20070]: cluster 2026-03-09T14:19:37.055632+0000 mon.a (mon.0) 398 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in
2026-03-09T14:19:38.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:38 vm05 bash[20070]: audit 2026-03-09T14:19:37.057164+0000 mon.a (mon.0) 399 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
2026-03-09T14:19:38.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:38 vm03 bash[17524]: cluster 2026-03-09T14:19:36.922103+0000 mgr.x (mgr.14150) 132 : cluster [DBG] pgmap v87: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-09T14:19:38.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:38 vm03 bash[17524]: audit 2026-03-09T14:19:37.053539+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
2026-03-09T14:19:38.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:38 vm03 bash[17524]: cluster 2026-03-09T14:19:37.055632+0000 mon.a (mon.0) 398 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in
2026-03-09T14:19:38.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:38 vm03 bash[17524]: audit 2026-03-09T14:19:37.057164+0000 mon.a (mon.0) 399 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
2026-03-09T14:19:39.511 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:39 vm04 bash[19581]: audit 2026-03-09T14:19:38.072117+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
2026-03-09T14:19:39.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:39 vm04 bash[19581]: cluster 2026-03-09T14:19:38.074322+0000 mon.a (mon.0) 401 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in
2026-03-09T14:19:39.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:39 vm04 bash[19581]: audit 2026-03-09T14:19:39.057663+0000 mon.a (mon.0) 402 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-09T14:19:39.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:39 vm04 bash[19581]: audit 2026-03-09T14:19:39.075489+0000 mon.a (mon.0) 403 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-09T14:19:39.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:39 vm05 bash[20070]: audit 2026-03-09T14:19:38.072117+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
2026-03-09T14:19:39.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:39 vm05 bash[20070]: cluster 2026-03-09T14:19:38.074322+0000 mon.a (mon.0) 401 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in
2026-03-09T14:19:39.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:39 vm05 bash[20070]: audit 2026-03-09T14:19:39.057663+0000 mon.a (mon.0) 402 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-09T14:19:39.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:39 vm05 bash[20070]: audit 2026-03-09T14:19:39.075489+0000 mon.a (mon.0) 403 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-09T14:19:39.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:39 vm03 bash[17524]: audit 2026-03-09T14:19:38.072117+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
2026-03-09T14:19:39.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:39 vm03 bash[17524]: cluster 2026-03-09T14:19:38.074322+0000 mon.a (mon.0) 401 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in
2026-03-09T14:19:39.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:39 vm03 bash[17524]: audit 2026-03-09T14:19:39.057663+0000 mon.a (mon.0) 402 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-09T14:19:39.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:39 vm03 bash[17524]: audit 2026-03-09T14:19:39.075489+0000 mon.a (mon.0) 403 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-09T14:19:40.511 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:40 vm04 bash[19581]: cluster 2026-03-09T14:19:38.922392+0000 mgr.x (mgr.14150) 133 : cluster [DBG] pgmap v90: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:19:40.511 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:40 vm04 bash[19581]: audit 2026-03-09T14:19:39.081596+0000 mon.a (mon.0) 404 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T14:19:40.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:40 vm04 bash[19581]: audit 2026-03-09T14:19:39.081718+0000 mon.a (mon.0) 405 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:19:40.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:40 vm04 bash[19581]: audit 2026-03-09T14:19:39.081777+0000 mon.a (mon.0) 406 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T14:19:40.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:40 vm04 bash[19581]: audit 2026-03-09T14:19:39.082706+0000 mon.c (mon.1) 6 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-09T14:19:40.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:40 vm04 bash[19581]: audit 2026-03-09T14:19:39.085061+0000 mon.a (mon.0) 407 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T14:19:40.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:40 vm04 bash[19581]: audit 2026-03-09T14:19:39.085191+0000 mon.a (mon.0) 408 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:19:40.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:40 vm04 bash[19581]: audit 2026-03-09T14:19:39.085562+0000 mon.a (mon.0) 409 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T14:19:40.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:40 vm04 bash[19581]: audit 2026-03-09T14:19:39.100205+0000 mon.c (mon.1) 7 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-09T14:19:40.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:40 vm04 bash[19581]: audit 2026-03-09T14:19:39.102652+0000 mon.b (mon.2) 10 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-09T14:19:40.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:40 vm04 bash[19581]: audit 2026-03-09T14:19:39.104407+0000 mon.a (mon.0) 410 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T14:19:40.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:40 vm04 bash[19581]: audit 2026-03-09T14:19:39.104506+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:19:40.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:40 vm04 bash[19581]: audit 2026-03-09T14:19:39.104717+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T14:19:40.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:40 vm04 bash[19581]: audit 2026-03-09T14:19:39.122686+0000 mon.b (mon.2) 11 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-09T14:19:40.512 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:40 vm04 bash[19581]: cluster 2026-03-09T14:19:39.135114+0000 mon.a (mon.0) 413 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in
2026-03-09T14:19:40.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:40 vm05 bash[20070]: cluster 2026-03-09T14:19:38.922392+0000 mgr.x (mgr.14150) 133 : cluster [DBG] pgmap v90: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:19:40.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:40 vm05 bash[20070]: audit 2026-03-09T14:19:39.081596+0000 mon.a (mon.0) 404 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T14:19:40.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:40 vm05 bash[20070]: audit 2026-03-09T14:19:39.081718+0000 mon.a (mon.0) 405 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:19:40.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:40 vm05 bash[20070]: audit 2026-03-09T14:19:39.081777+0000 mon.a (mon.0) 406 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T14:19:40.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:40 vm05 bash[20070]: audit 2026-03-09T14:19:39.082706+0000 mon.c (mon.1) 6 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-09T14:19:40.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:40 vm05 bash[20070]: audit 2026-03-09T14:19:39.085061+0000 mon.a (mon.0) 407 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T14:19:40.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:40 vm05 bash[20070]: audit 2026-03-09T14:19:39.085191+0000 mon.a (mon.0) 408 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:19:40.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:40 vm05 bash[20070]: audit 2026-03-09T14:19:39.085562+0000 mon.a (mon.0) 409 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T14:19:40.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:40 vm05 bash[20070]: audit 2026-03-09T14:19:39.100205+0000 mon.c (mon.1) 7 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-09T14:19:40.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:40 vm05 bash[20070]: audit 2026-03-09T14:19:39.102652+0000 mon.b (mon.2) 10 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-09T14:19:40.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:40 vm05 bash[20070]: audit 2026-03-09T14:19:39.104407+0000 mon.a (mon.0) 410 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T14:19:40.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:40 vm05 bash[20070]: audit 2026-03-09T14:19:39.104506+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:19:40.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:40 vm05 bash[20070]: audit 2026-03-09T14:19:39.104717+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T14:19:40.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:40 vm05 bash[20070]: audit 2026-03-09T14:19:39.122686+0000 mon.b (mon.2) 11 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-09T14:19:40.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:40 vm05 bash[20070]: cluster 2026-03-09T14:19:39.135114+0000 mon.a (mon.0) 413 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in
2026-03-09T14:19:40.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:40 vm03 bash[17524]: cluster 2026-03-09T14:19:38.922392+0000 mgr.x (mgr.14150) 133 : cluster [DBG] pgmap v90: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:19:40.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:40 vm03 bash[17524]: audit 2026-03-09T14:19:39.081596+0000 mon.a (mon.0) 404 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T14:19:40.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:40 vm03 bash[17524]: audit 2026-03-09T14:19:39.081718+0000 mon.a (mon.0) 405 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:19:40.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:40 vm03 bash[17524]: audit 2026-03-09T14:19:39.081777+0000 mon.a (mon.0) 406 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T14:19:40.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:40 vm03 bash[17524]: audit 2026-03-09T14:19:39.082706+0000 mon.c (mon.1) 6 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-09T14:19:40.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:40 vm03 bash[17524]: audit 2026-03-09T14:19:39.085061+0000 mon.a (mon.0) 407 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T14:19:40.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:40 vm03 bash[17524]: audit 2026-03-09T14:19:39.085191+0000 mon.a (mon.0) 408 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T14:19:40.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:40 vm03 bash[17524]: audit 2026-03-09T14:19:39.085562+0000 mon.a (mon.0) 409 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T14:19:40.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:40 vm03 bash[17524]: audit 2026-03-09T14:19:39.100205+0000 mon.c (mon.1) 7 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
args=[json]: finished 2026-03-09T14:19:40.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:40 vm03 bash[17524]: audit 2026-03-09T14:19:39.102652+0000 mon.b (mon.2) 10 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T14:19:40.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:40 vm03 bash[17524]: audit 2026-03-09T14:19:39.102652+0000 mon.b (mon.2) 10 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T14:19:40.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:40 vm03 bash[17524]: audit 2026-03-09T14:19:39.104407+0000 mon.a (mon.0) 410 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:19:40.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:40 vm03 bash[17524]: audit 2026-03-09T14:19:39.104407+0000 mon.a (mon.0) 410 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:19:40.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:40 vm03 bash[17524]: audit 2026-03-09T14:19:39.104506+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:19:40.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:40 vm03 bash[17524]: audit 2026-03-09T14:19:39.104506+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:19:40.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:40 vm03 bash[17524]: audit 2026-03-09T14:19:39.104717+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:19:40.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:40 vm03 bash[17524]: audit 2026-03-09T14:19:39.104717+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:19:40.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:40 vm03 bash[17524]: audit 2026-03-09T14:19:39.122686+0000 mon.b (mon.2) 11 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T14:19:40.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:40 vm03 bash[17524]: audit 2026-03-09T14:19:39.122686+0000 mon.b (mon.2) 11 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T14:19:40.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:40 vm03 bash[17524]: cluster 2026-03-09T14:19:39.135114+0000 mon.a (mon.0) 413 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-09T14:19:40.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:40 vm03 bash[17524]: cluster 2026-03-09T14:19:39.135114+0000 mon.a (mon.0) 413 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-09T14:19:41.228 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.b/config 2026-03-09T14:19:42.244 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:42 vm04 bash[19581]: cluster 2026-03-09T14:19:40.922622+0000 mgr.x (mgr.14150) 134 : cluster [DBG] pgmap v92: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:19:42.244 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 
14:19:42 vm04 bash[19581]: cluster 2026-03-09T14:19:40.922622+0000 mgr.x (mgr.14150) 134 : cluster [DBG] pgmap v92: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:19:42.244 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:42 vm04 bash[19581]: cluster 2026-03-09T14:19:41.127671+0000 mon.a (mon.0) 414 : cluster [DBG] mgrmap e13: x(active, since 2m) 2026-03-09T14:19:42.244 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:42 vm04 bash[19581]: cluster 2026-03-09T14:19:41.127671+0000 mon.a (mon.0) 414 : cluster [DBG] mgrmap e13: x(active, since 2m) 2026-03-09T14:19:42.244 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:42 vm04 bash[19581]: audit 2026-03-09T14:19:41.990539+0000 mon.a (mon.0) 415 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:42.244 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:42 vm04 bash[19581]: audit 2026-03-09T14:19:41.990539+0000 mon.a (mon.0) 415 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:42.244 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:42 vm04 bash[19581]: audit 2026-03-09T14:19:41.994119+0000 mon.a (mon.0) 416 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:42.244 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:42 vm04 bash[19581]: audit 2026-03-09T14:19:41.994119+0000 mon.a (mon.0) 416 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:42.244 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:42 vm04 bash[19581]: audit 2026-03-09T14:19:41.994787+0000 mon.a (mon.0) 417 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:19:42.244 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:42 vm04 bash[19581]: audit 2026-03-09T14:19:41.994787+0000 mon.a (mon.0) 417 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:19:42.244 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:42 vm04 bash[19581]: audit 2026-03-09T14:19:41.997409+0000 mon.a (mon.0) 418 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:42.245 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:42 vm04 bash[19581]: audit 2026-03-09T14:19:41.997409+0000 mon.a (mon.0) 418 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:42.245 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:42 vm04 bash[19581]: audit 2026-03-09T14:19:41.998660+0000 mon.a (mon.0) 419 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:19:42.245 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:42 vm04 bash[19581]: audit 2026-03-09T14:19:41.998660+0000 mon.a (mon.0) 419 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:19:42.245 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:42 vm04 bash[19581]: audit 2026-03-09T14:19:41.999039+0000 mon.a (mon.0) 420 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:19:42.245 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:42 vm04 bash[19581]: audit 
2026-03-09T14:19:41.999039+0000 mon.a (mon.0) 420 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:19:42.245 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:42 vm04 bash[19581]: audit 2026-03-09T14:19:42.001965+0000 mon.a (mon.0) 421 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:42.245 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:42 vm04 bash[19581]: audit 2026-03-09T14:19:42.001965+0000 mon.a (mon.0) 421 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:42.512 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:42 vm05 bash[20070]: cluster 2026-03-09T14:19:40.922622+0000 mgr.x (mgr.14150) 134 : cluster [DBG] pgmap v92: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:19:42.512 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:42 vm05 bash[20070]: cluster 2026-03-09T14:19:40.922622+0000 mgr.x (mgr.14150) 134 : cluster [DBG] pgmap v92: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:19:42.512 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:42 vm05 bash[20070]: cluster 2026-03-09T14:19:41.127671+0000 mon.a (mon.0) 414 : cluster [DBG] mgrmap e13: x(active, since 2m) 2026-03-09T14:19:42.512 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:42 vm05 bash[20070]: cluster 2026-03-09T14:19:41.127671+0000 mon.a (mon.0) 414 : cluster [DBG] mgrmap e13: x(active, since 2m) 2026-03-09T14:19:42.512 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:42 vm05 bash[20070]: audit 2026-03-09T14:19:41.990539+0000 mon.a (mon.0) 415 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:42.512 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:42 vm05 bash[20070]: audit 2026-03-09T14:19:41.990539+0000 mon.a (mon.0) 415 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:42.512 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:42 vm05 bash[20070]: audit 2026-03-09T14:19:41.994119+0000 mon.a (mon.0) 416 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:42.512 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:42 vm05 bash[20070]: audit 2026-03-09T14:19:41.994119+0000 mon.a (mon.0) 416 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:42.512 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:42 vm05 bash[20070]: audit 2026-03-09T14:19:41.994787+0000 mon.a (mon.0) 417 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:19:42.512 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:42 vm05 bash[20070]: audit 2026-03-09T14:19:41.994787+0000 mon.a (mon.0) 417 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:19:42.512 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:42 vm05 bash[20070]: audit 2026-03-09T14:19:41.997409+0000 mon.a (mon.0) 418 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:42.512 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:42 vm05 bash[20070]: audit 2026-03-09T14:19:41.997409+0000 mon.a (mon.0) 418 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 
2026-03-09T14:19:42.512 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:42 vm05 bash[20070]: audit 2026-03-09T14:19:41.998660+0000 mon.a (mon.0) 419 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:19:42.512 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:42 vm05 bash[20070]: audit 2026-03-09T14:19:41.998660+0000 mon.a (mon.0) 419 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:19:42.512 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:42 vm05 bash[20070]: audit 2026-03-09T14:19:41.999039+0000 mon.a (mon.0) 420 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:19:42.512 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:42 vm05 bash[20070]: audit 2026-03-09T14:19:41.999039+0000 mon.a (mon.0) 420 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:19:42.512 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:42 vm05 bash[20070]: audit 2026-03-09T14:19:42.001965+0000 mon.a (mon.0) 421 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:42.513 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:42 vm05 bash[20070]: audit 2026-03-09T14:19:42.001965+0000 mon.a (mon.0) 421 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:42.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:42 vm03 bash[17524]: cluster 2026-03-09T14:19:40.922622+0000 mgr.x (mgr.14150) 134 : cluster [DBG] pgmap v92: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:19:42.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:42 vm03 bash[17524]: cluster 2026-03-09T14:19:40.922622+0000 mgr.x (mgr.14150) 134 : cluster [DBG] pgmap v92: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:19:42.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:42 vm03 bash[17524]: cluster 2026-03-09T14:19:41.127671+0000 mon.a (mon.0) 414 : cluster [DBG] mgrmap e13: x(active, since 2m) 2026-03-09T14:19:42.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:42 vm03 bash[17524]: cluster 2026-03-09T14:19:41.127671+0000 mon.a (mon.0) 414 : cluster [DBG] mgrmap e13: x(active, since 2m) 2026-03-09T14:19:42.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:42 vm03 bash[17524]: audit 2026-03-09T14:19:41.990539+0000 mon.a (mon.0) 415 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:42.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:42 vm03 bash[17524]: audit 2026-03-09T14:19:41.990539+0000 mon.a (mon.0) 415 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:42.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:42 vm03 bash[17524]: audit 2026-03-09T14:19:41.994119+0000 mon.a (mon.0) 416 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:42.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:42 vm03 bash[17524]: audit 2026-03-09T14:19:41.994119+0000 mon.a (mon.0) 416 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:19:42.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:42 vm03 bash[17524]: audit 
2026-03-09T14:19:41.994787+0000 mon.a (mon.0) 417 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:19:42.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:42 vm03 bash[17524]: audit 2026-03-09T14:19:41.997409+0000 mon.a (mon.0) 418 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:42.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:42 vm03 bash[17524]: audit 2026-03-09T14:19:41.998660+0000 mon.a (mon.0) 419 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:19:42.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:42 vm03 bash[17524]: audit 2026-03-09T14:19:41.999039+0000 mon.a (mon.0) 420 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:19:42.556 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:42 vm03 bash[17524]: audit 2026-03-09T14:19:42.001965+0000 mon.a (mon.0) 421 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:19:42.709 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:19:42.721 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph orch daemon add osd vm04:/dev/vdd
2026-03-09T14:19:43.511 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:43 vm04 bash[19581]: cephadm 2026-03-09T14:19:41.985712+0000 mgr.x (mgr.14150) 135 : cephadm [INF] Detected new or changed devices on vm04
2026-03-09T14:19:43.511 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:43 vm04 bash[19581]: cephadm 2026-03-09T14:19:41.995129+0000 mgr.x (mgr.14150) 136 : cephadm [INF] Adjusting osd_memory_target on vm04 to 4551M
2026-03-09T14:19:43.512 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:43 vm05 bash[20070]: cephadm 2026-03-09T14:19:41.985712+0000 mgr.x (mgr.14150) 135 : cephadm [INF] Detected new or changed devices on vm04
2026-03-09T14:19:43.512 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:43 vm05 bash[20070]: cephadm 2026-03-09T14:19:41.995129+0000 mgr.x (mgr.14150) 136 : cephadm [INF] Adjusting osd_memory_target on vm04 to 4551M
2026-03-09T14:19:43.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:43 vm03 bash[17524]: cephadm 2026-03-09T14:19:41.985712+0000 mgr.x (mgr.14150) 135 : cephadm [INF] Detected new or changed devices on vm04
2026-03-09T14:19:43.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:43 vm03 bash[17524]: cephadm 2026-03-09T14:19:41.995129+0000 mgr.x (mgr.14150) 136 : cephadm [INF] Adjusting osd_memory_target on vm04 to 4551M
2026-03-09T14:19:44.511 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:44 vm04 bash[19581]: cluster 2026-03-09T14:19:42.922868+0000 mgr.x (mgr.14150) 137 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:19:44.512 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:44 vm05 bash[20070]: cluster 2026-03-09T14:19:42.922868+0000 mgr.x (mgr.14150) 137 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:19:44.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:44 vm03 bash[17524]: cluster 2026-03-09T14:19:42.922868+0000 mgr.x (mgr.14150) 137 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
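The cephadm entries above show the mgr's OSD memory autotuning (it clears the per-OSD osd_memory_target override and recomputes a shared target of 4551M for the OSDs on vm04 after new devices are detected), and then the test driver adds a new OSD through the orchestrator. A minimal sketch of that step, using the host:device argument recorded above (the --image/--fsid flags from the full command line are omitted for brevity):

    # Add a single OSD on an explicit host:device via the orchestrator,
    # as the test does above through `cephadm shell`:
    sudo cephadm shell -- ceph orch daemon add osd vm04:/dev/vdd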
2026-03-09T14:19:46.510 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:46 vm04 bash[19581]: cluster 2026-03-09T14:19:44.923101+0000 mgr.x (mgr.14150) 138 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:19:46.511 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:46 vm05 bash[20070]: cluster 2026-03-09T14:19:44.923101+0000 mgr.x (mgr.14150) 138 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:19:46.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:46 vm03 bash[17524]: cluster 2026-03-09T14:19:44.923101+0000 mgr.x (mgr.14150) 138 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:19:47.323 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.b/config
2026-03-09T14:19:48.510 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:48 vm04 bash[19581]: cluster 2026-03-09T14:19:46.923367+0000 mgr.x (mgr.14150) 139 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:19:48.510 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:48 vm04 bash[19581]: audit 2026-03-09T14:19:47.567697+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T14:19:48.510 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:48 vm04 bash[19581]: audit 2026-03-09T14:19:47.568973+0000 mon.a (mon.0) 423 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T14:19:48.510 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:48 vm04 bash[19581]: audit 2026-03-09T14:19:47.569443+0000 mon.a (mon.0) 424 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:19:48.511 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:48 vm05 bash[20070]: cluster 2026-03-09T14:19:46.923367+0000 mgr.x (mgr.14150) 139 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:19:48.511 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:48 vm05 bash[20070]: audit 2026-03-09T14:19:47.567697+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T14:19:48.511 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:48 vm05 bash[20070]: audit 2026-03-09T14:19:47.568973+0000 mon.a (mon.0) 423 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T14:19:48.511 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:48 vm05 bash[20070]: audit 2026-03-09T14:19:47.569443+0000 mon.a (mon.0) 424 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:19:48.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:48 vm03 bash[17524]: cluster 2026-03-09T14:19:46.923367+0000 mgr.x (mgr.14150) 139 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:19:48.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:48 vm03 bash[17524]: audit 2026-03-09T14:19:47.567697+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T14:19:48.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:48 vm03 bash[17524]: audit 2026-03-09T14:19:47.568973+0000 mon.a (mon.0) 423 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T14:19:48.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:48 vm03 bash[17524]: audit 2026-03-09T14:19:47.569443+0000 mon.a (mon.0) 424 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:19:49.510 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:49 vm04 bash[19581]: audit 2026-03-09T14:19:47.566568+0000 mgr.x (mgr.14150) 140 : audit [DBG] from='client.24175 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:19:49.511 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:49 vm05 bash[20070]: audit 2026-03-09T14:19:47.566568+0000 mgr.x (mgr.14150) 140 : audit [DBG] from='client.24175 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:19:49.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:49 vm03 bash[17524]: audit 2026-03-09T14:19:47.566568+0000 mgr.x (mgr.14150) 140 : audit [DBG] from='client.24175 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:19:50.510 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:50 vm04 bash[19581]: cluster 2026-03-09T14:19:48.923581+0000 mgr.x (mgr.14150) 141 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:19:50.511 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:50 vm05 bash[20070]: cluster 2026-03-09T14:19:48.923581+0000 mgr.x (mgr.14150) 141 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:19:50.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:50 vm03 bash[17524]: cluster 2026-03-09T14:19:48.923581+0000 mgr.x (mgr.14150) 141 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:19:52.352 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:52 vm04 bash[19581]: cluster 2026-03-09T14:19:50.923805+0000 mgr.x (mgr.14150) 142 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:19:52.352 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:52 vm04 bash[19581]: audit 2026-03-09T14:19:51.951492+0000 mon.b (mon.2) 12 : audit [INF] from='client.? 192.168.123.104:0/1727102419' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d1d9774a-a921-4ff4-9d67-c8545864b268"}]: dispatch
2026-03-09T14:19:52.352 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:52 vm04 bash[19581]: audit 2026-03-09T14:19:51.953180+0000 mon.a (mon.0) 425 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d1d9774a-a921-4ff4-9d67-c8545864b268"}]: dispatch
2026-03-09T14:19:52.352 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:52 vm04 bash[19581]: audit 2026-03-09T14:19:51.956315+0000 mon.a (mon.0) 426 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d1d9774a-a921-4ff4-9d67-c8545864b268"}]': finished
2026-03-09T14:19:52.352 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:52 vm04 bash[19581]: cluster 2026-03-09T14:19:51.958869+0000 mon.a (mon.0) 427 : cluster [DBG] osdmap e22: 4 total, 3 up, 4 in
2026-03-09T14:19:52.352 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:52 vm04 bash[19581]: audit 2026-03-09T14:19:51.958993+0000 mon.a (mon.0) 428 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T14:19:52.511 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:52 vm05 bash[20070]: cluster 2026-03-09T14:19:50.923805+0000 mgr.x (mgr.14150) 142 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:19:52.511 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:52 vm05 bash[20070]: audit 2026-03-09T14:19:51.951492+0000 mon.b (mon.2) 12 : audit [INF] from='client.? 192.168.123.104:0/1727102419' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d1d9774a-a921-4ff4-9d67-c8545864b268"}]: dispatch
2026-03-09T14:19:52.511 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:52 vm05 bash[20070]: audit 2026-03-09T14:19:51.953180+0000 mon.a (mon.0) 425 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d1d9774a-a921-4ff4-9d67-c8545864b268"}]: dispatch
2026-03-09T14:19:52.511 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:52 vm05 bash[20070]: audit 2026-03-09T14:19:51.956315+0000 mon.a (mon.0) 426 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d1d9774a-a921-4ff4-9d67-c8545864b268"}]': finished
2026-03-09T14:19:52.511 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:52 vm05 bash[20070]: cluster 2026-03-09T14:19:51.958869+0000 mon.a (mon.0) 427 : cluster [DBG] osdmap e22: 4 total, 3 up, 4 in
2026-03-09T14:19:52.511 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:52 vm05 bash[20070]: audit 2026-03-09T14:19:51.958993+0000 mon.a (mon.0) 428 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T14:19:52.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:52 vm03 bash[17524]: cluster 2026-03-09T14:19:50.923805+0000 mgr.x (mgr.14150) 142 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:19:52.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:52 vm03 bash[17524]: audit 2026-03-09T14:19:51.951492+0000 mon.b (mon.2) 12 : audit [INF] from='client.? 192.168.123.104:0/1727102419' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d1d9774a-a921-4ff4-9d67-c8545864b268"}]: dispatch
2026-03-09T14:19:52.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:52 vm03 bash[17524]: audit 2026-03-09T14:19:51.953180+0000 mon.a (mon.0) 425 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d1d9774a-a921-4ff4-9d67-c8545864b268"}]: dispatch
2026-03-09T14:19:52.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:52 vm03 bash[17524]: audit 2026-03-09T14:19:51.956315+0000 mon.a (mon.0) 426 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d1d9774a-a921-4ff4-9d67-c8545864b268"}]': finished
2026-03-09T14:19:52.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:52 vm03 bash[17524]: cluster 2026-03-09T14:19:51.958869+0000 mon.a (mon.0) 427 : cluster [DBG] osdmap e22: 4 total, 3 up, 4 in
2026-03-09T14:19:52.555 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:52 vm03 bash[17524]: audit 2026-03-09T14:19:51.958993+0000 mon.a (mon.0) 428 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T14:19:53.510 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:53 vm04 bash[19581]: audit 2026-03-09T14:19:52.528424+0000 mon.a (mon.0) 429 : audit [DBG] from='client.? 192.168.123.104:0/3006744235' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T14:19:53.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:53 vm05 bash[20070]: audit 2026-03-09T14:19:52.528424+0000 mon.a (mon.0) 429 : audit [DBG] from='client.? 192.168.123.104:0/3006744235' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T14:19:53.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:53 vm03 bash[17524]: audit 2026-03-09T14:19:52.528424+0000 mon.a (mon.0) 429 : audit [DBG] from='client.? 192.168.123.104:0/3006744235' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
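With the bootstrap key in hand, ceph-volume on vm04 registers the new OSD: "osd new" with a fresh uuid returns the next free id (osd.3 here), and the osdmap ticks to e22 with "4 total, 3 up, 4 in", meaning the OSD exists in the map but has not booted yet. A sketch of the equivalent manual call (the keyring path is the conventional bootstrap-osd location, not taken from this log):

    ceph --name client.bootstrap-osd \
         --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
         osd new d1d9774a-a921-4ff4-9d67-c8545864b268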
2026-03-09T14:19:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:54 vm04 bash[19581]: cluster 2026-03-09T14:19:52.924050+0000 mgr.x (mgr.14150) 143 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:19:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:54 vm05 bash[20070]: cluster 2026-03-09T14:19:52.924050+0000 mgr.x (mgr.14150) 143 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:19:54.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:54 vm03 bash[17524]: cluster 2026-03-09T14:19:52.924050+0000 mgr.x (mgr.14150) 143 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:19:56.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:56 vm04 bash[19581]: cluster 2026-03-09T14:19:54.924272+0000 mgr.x (mgr.14150) 144 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:19:56.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:56 vm05 bash[20070]: cluster 2026-03-09T14:19:54.924272+0000 mgr.x (mgr.14150) 144 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:19:56.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:56 vm03 bash[17524]: cluster 2026-03-09T14:19:54.924272+0000 mgr.x (mgr.14150) 144 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:19:58.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:19:58 vm04 bash[19581]: cluster 2026-03-09T14:19:56.924506+0000 mgr.x (mgr.14150) 145 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:19:58.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:19:58 vm05 bash[20070]: cluster 2026-03-09T14:19:56.924506+0000 mgr.x (mgr.14150) 145 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:19:58.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:19:58 vm03 bash[17524]: cluster 2026-03-09T14:19:56.924506+0000 mgr.x (mgr.14150) 145 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:20:00.493 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:00 vm04 bash[19581]: cluster 2026-03-09T14:19:58.924803+0000 mgr.x (mgr.14150) 146 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:20:00.493 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:00 vm04 bash[19581]: cluster 2026-03-09T14:20:00.000140+0000 mon.a (mon.0) 430 : cluster [INF] overall HEALTH_OK
2026-03-09T14:20:00.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:00 vm05 bash[20070]: cluster 2026-03-09T14:19:58.924803+0000 mgr.x (mgr.14150) 146 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:20:00.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:00 vm05 bash[20070]: cluster 2026-03-09T14:20:00.000140+0000 mon.a (mon.0) 430 : cluster [INF] overall HEALTH_OK
2026-03-09T14:20:00.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:00 vm03 bash[17524]: cluster 2026-03-09T14:19:58.924803+0000 mgr.x (mgr.14150) 146 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:20:00.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:00 vm03 bash[17524]: cluster 2026-03-09T14:20:00.000140+0000 mon.a (mon.0) 430 : cluster [INF] overall HEALTH_OK
2026-03-09T14:20:01.428 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:01 vm04 bash[19581]: audit 2026-03-09T14:20:00.898159+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-09T14:20:01.428 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:01 vm04 bash[19581]: audit 2026-03-09T14:20:00.898733+0000 mon.a (mon.0) 432 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:20:01.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:01 vm05 bash[20070]: audit 2026-03-09T14:20:00.898159+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-09T14:20:01.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:01 vm05 bash[20070]: audit 2026-03-09T14:20:00.898733+0000 mon.a (mon.0) 432 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:20:01.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:01 vm03 bash[17524]: audit 2026-03-09T14:20:00.898159+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-09T14:20:01.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:01 vm03 bash[17524]: audit 2026-03-09T14:20:00.898733+0000 mon.a (mon.0) 432 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
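Audit entries 431/432 are the mgr collecting what the new daemon's container needs before deployment: the osd.3 keyring and a minimal conf pointing at the monitors. Shell equivalents, for reference:

    ceph auth get osd.3                  # keyring placed in the daemon's data dir
    ceph config generate-minimal-conf    # ceph.conf for the container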
2026-03-09T14:20:01.759 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:01 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:20:01.759 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:20:01 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
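The KillMode=none warnings are expected with cephadm and benign in this run: the unit delegates stopping the daemon to the container runtime, so systemd is told not to kill the container's processes itself, and newer systemd flags that setting as deprecated. On a unit where systemd should manage the processes, the fix would be a drop-in along these lines (illustrative only; do not apply it to cephadm's unit template):

    sudo systemctl edit ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.3
    # then, in the override file:
    # [Service]
    # KillMode=mixed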
2026-03-09T14:20:02.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:02 vm04 bash[19581]: cephadm 2026-03-09T14:20:00.899208+0000 mgr.x (mgr.14150) 147 : cephadm [INF] Deploying daemon osd.3 on vm04
2026-03-09T14:20:02.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:02 vm04 bash[19581]: cluster 2026-03-09T14:20:00.925009+0000 mgr.x (mgr.14150) 148 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:20:02.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:02 vm04 bash[19581]: audit 2026-03-09T14:20:01.921869+0000 mon.a (mon.0) 433 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:20:02.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:02 vm04 bash[19581]: audit 2026-03-09T14:20:01.927183+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:02.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:02 vm04 bash[19581]: audit 2026-03-09T14:20:01.931282+0000 mon.a (mon.0) 435 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:02.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:02 vm05 bash[20070]: cephadm 2026-03-09T14:20:00.899208+0000 mgr.x (mgr.14150) 147 : cephadm [INF] Deploying daemon osd.3 on vm04
2026-03-09T14:20:02.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:02 vm05 bash[20070]: cluster 2026-03-09T14:20:00.925009+0000 mgr.x (mgr.14150) 148 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:20:02.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:02 vm05 bash[20070]: audit 2026-03-09T14:20:01.921869+0000 mon.a (mon.0) 433 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:20:02.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:02 vm05 bash[20070]: audit 2026-03-09T14:20:01.927183+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:02.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:02 vm05 bash[20070]: audit 2026-03-09T14:20:01.931282+0000 mon.a (mon.0) 435 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:02.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:02 vm03 bash[17524]: cephadm 2026-03-09T14:20:00.899208+0000 mgr.x (mgr.14150) 147 : cephadm [INF] Deploying daemon osd.3 on vm04
2026-03-09T14:20:02.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:02 vm03 bash[17524]: cluster 2026-03-09T14:20:00.925009+0000 mgr.x (mgr.14150) 148 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:20:02.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:02 vm03 bash[17524]: audit 2026-03-09T14:20:01.921869+0000 mon.a (mon.0) 433 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:20:02.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:02 vm03 bash[17524]: audit 2026-03-09T14:20:01.927183+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:02.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:02 vm03 bash[17524]: audit 2026-03-09T14:20:01.931282+0000 mon.a (mon.0) 435 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:04.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:04 vm04 bash[19581]: cluster 2026-03-09T14:20:02.925207+0000 mgr.x (mgr.14150) 149 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:20:04.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:04 vm05 bash[20070]: cluster 2026-03-09T14:20:02.925207+0000 mgr.x (mgr.14150) 149 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:20:04.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:04 vm03 bash[17524]: cluster 2026-03-09T14:20:02.925207+0000 mgr.x (mgr.14150) 149 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:20:06.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:06 vm04 bash[19581]: cluster 2026-03-09T14:20:04.925402+0000 mgr.x (mgr.14150) 150 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:20:06.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:06 vm04 bash[19581]: audit 2026-03-09T14:20:05.372149+0000 mon.b (mon.2) 13 : audit [INF] from='osd.3 [v2:192.168.123.104:6808/1600567220,v1:192.168.123.104:6809/1600567220]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-09T14:20:06.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:06 vm04 bash[19581]: audit 2026-03-09T14:20:05.373499+0000 mon.a (mon.0) 436 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-09T14:20:06.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:06 vm05 bash[20070]: cluster 2026-03-09T14:20:04.925402+0000 mgr.x (mgr.14150) 150 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:20:06.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:06 vm05 bash[20070]: audit 2026-03-09T14:20:05.372149+0000 mon.b (mon.2) 13 : audit [INF] from='osd.3 [v2:192.168.123.104:6808/1600567220,v1:192.168.123.104:6809/1600567220]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-09T14:20:06.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:06 vm05 bash[20070]: audit 2026-03-09T14:20:05.373499+0000 mon.a (mon.0) 436 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-09T14:20:06.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:06 vm03 bash[17524]: cluster 2026-03-09T14:20:04.925402+0000 mgr.x (mgr.14150) 150 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:20:06.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:06 vm03 bash[17524]: audit 2026-03-09T14:20:05.372149+0000 mon.b (mon.2) 13 : audit [INF] from='osd.3 [v2:192.168.123.104:6808/1600567220,v1:192.168.123.104:6809/1600567220]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-09T14:20:06.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:06 vm03 bash[17524]: audit 2026-03-09T14:20:05.373499+0000 mon.a (mon.0) 436 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
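On first boot the new OSD registers its own device class (osd_class_update_on_start is on by default), which is why audit entries 13/436 come from='osd.3' rather than from the mgr. The CLI equivalent of what the daemon dispatches:

    ceph osd crush set-device-class hdd osd.3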
"ids": ["3"]}]: dispatch 2026-03-09T14:20:07.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:07 vm04 bash[19581]: audit 2026-03-09T14:20:06.231582+0000 mon.a (mon.0) 437 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T14:20:07.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:07 vm04 bash[19581]: audit 2026-03-09T14:20:06.231582+0000 mon.a (mon.0) 437 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T14:20:07.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:07 vm04 bash[19581]: audit 2026-03-09T14:20:06.234081+0000 mon.b (mon.2) 14 : audit [INF] from='osd.3 [v2:192.168.123.104:6808/1600567220,v1:192.168.123.104:6809/1600567220]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:20:07.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:07 vm04 bash[19581]: audit 2026-03-09T14:20:06.234081+0000 mon.b (mon.2) 14 : audit [INF] from='osd.3 [v2:192.168.123.104:6808/1600567220,v1:192.168.123.104:6809/1600567220]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:20:07.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:07 vm04 bash[19581]: cluster 2026-03-09T14:20:06.234195+0000 mon.a (mon.0) 438 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-09T14:20:07.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:07 vm04 bash[19581]: cluster 2026-03-09T14:20:06.234195+0000 mon.a (mon.0) 438 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-09T14:20:07.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:07 vm04 bash[19581]: audit 2026-03-09T14:20:06.234402+0000 mon.a (mon.0) 439 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:20:07.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:07 vm04 bash[19581]: audit 2026-03-09T14:20:06.234402+0000 mon.a (mon.0) 439 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:20:07.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:07 vm04 bash[19581]: audit 2026-03-09T14:20:06.235268+0000 mon.a (mon.0) 440 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:20:07.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:07 vm04 bash[19581]: audit 2026-03-09T14:20:06.235268+0000 mon.a (mon.0) 440 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:20:07.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:07 vm05 bash[20070]: audit 2026-03-09T14:20:06.231582+0000 mon.a (mon.0) 437 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T14:20:07.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:07 vm05 bash[20070]: audit 2026-03-09T14:20:06.231582+0000 mon.a (mon.0) 437 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 
2026-03-09T14:20:07.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:07 vm05 bash[20070]: audit 2026-03-09T14:20:06.234081+0000 mon.b (mon.2) 14 : audit [INF] from='osd.3 [v2:192.168.123.104:6808/1600567220,v1:192.168.123.104:6809/1600567220]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-09T14:20:07.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:07 vm05 bash[20070]: cluster 2026-03-09T14:20:06.234195+0000 mon.a (mon.0) 438 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in
2026-03-09T14:20:07.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:07 vm05 bash[20070]: audit 2026-03-09T14:20:06.234402+0000 mon.a (mon.0) 439 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T14:20:07.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:07 vm05 bash[20070]: audit 2026-03-09T14:20:06.235268+0000 mon.a (mon.0) 440 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-09T14:20:07.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:07 vm03 bash[17524]: audit 2026-03-09T14:20:06.231582+0000 mon.a (mon.0) 437 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished
2026-03-09T14:20:07.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:07 vm03 bash[17524]: audit 2026-03-09T14:20:06.234081+0000 mon.b (mon.2) 14 : audit [INF] from='osd.3 [v2:192.168.123.104:6808/1600567220,v1:192.168.123.104:6809/1600567220]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-09T14:20:07.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:07 vm03 bash[17524]: cluster 2026-03-09T14:20:06.234195+0000 mon.a (mon.0) 438 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in
2026-03-09T14:20:07.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:07 vm03 bash[17524]: audit 2026-03-09T14:20:06.234402+0000 mon.a (mon.0) 439 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T14:20:07.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:07 vm03 bash[17524]: audit 2026-03-09T14:20:06.235268+0000 mon.a (mon.0) 440 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-09T14:20:08.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:08 vm04 bash[19581]: cluster 2026-03-09T14:20:06.925609+0000 mgr.x (mgr.14150) 151 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:20:08.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:08 vm04 bash[19581]: audit 2026-03-09T14:20:07.238966+0000 mon.a (mon.0) 441 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished
2026-03-09T14:20:08.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:08 vm04 bash[19581]: cluster 2026-03-09T14:20:07.253832+0000 mon.a (mon.0) 442 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in
2026-03-09T14:20:08.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:08 vm04 bash[19581]: audit 2026-03-09T14:20:07.254241+0000 mon.a (mon.0) 443 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T14:20:08.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:08 vm04 bash[19581]: audit 2026-03-09T14:20:08.018957+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:08.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:08 vm04 bash[19581]: audit 2026-03-09T14:20:08.023077+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:08.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:08 vm04 bash[19581]: audit 2026-03-09T14:20:08.023906+0000 mon.a (mon.0) 446 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:20:08.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:08 vm04 bash[19581]: audit 2026-03-09T14:20:08.024523+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:20:08.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:08 vm04 bash[19581]: audit 2026-03-09T14:20:08.028203+0000 mon.a (mon.0) 448 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:08.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:08 vm05 bash[20070]: cluster 2026-03-09T14:20:06.925609+0000 mgr.x (mgr.14150) 151 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:20:08.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:08 vm05 bash[20070]: audit 2026-03-09T14:20:07.238966+0000 mon.a (mon.0) 441 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished
2026-03-09T14:20:08.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:08 vm05 bash[20070]: cluster 2026-03-09T14:20:07.253832+0000 mon.a (mon.0) 442 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in
2026-03-09T14:20:08.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:08 vm05 bash[20070]: audit 2026-03-09T14:20:07.254241+0000 mon.a (mon.0) 443 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T14:20:08.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:08 vm05 bash[20070]: audit 2026-03-09T14:20:08.018957+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:08.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:08 vm05 bash[20070]: audit 2026-03-09T14:20:08.023077+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:08.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:08 vm05 bash[20070]: audit 2026-03-09T14:20:08.023906+0000 mon.a (mon.0) 446 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:20:08.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:08 vm05 bash[20070]: audit 2026-03-09T14:20:08.024523+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:20:08.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:08 vm05 bash[20070]: audit 2026-03-09T14:20:08.028203+0000 mon.a (mon.0) 448 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:08.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:08 vm03 bash[17524]: cluster 2026-03-09T14:20:06.925609+0000 mgr.x (mgr.14150) 151 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:20:08.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:08 vm03 bash[17524]: audit 2026-03-09T14:20:07.238966+0000 mon.a (mon.0) 441 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished
2026-03-09T14:20:08.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:08 vm03 bash[17524]: cluster 2026-03-09T14:20:07.253832+0000 mon.a (mon.0) 442 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in
2026-03-09T14:20:08.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:08 vm03 bash[17524]: audit 2026-03-09T14:20:07.254241+0000 mon.a (mon.0) 443 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T14:20:08.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:08 vm03 bash[17524]: audit 2026-03-09T14:20:08.018957+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:08.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:08 vm03 bash[17524]: audit 2026-03-09T14:20:08.023077+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:08.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:08 vm03 bash[17524]: audit 2026-03-09T14:20:08.023906+0000 mon.a (mon.0) 446 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:20:08.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:08 vm03 bash[17524]: audit 2026-03-09T14:20:08.024523+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:20:08.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:08 vm03 bash[17524]: audit 2026-03-09T14:20:08.028203+0000 mon.a (mon.0) 448 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:08.949 INFO:teuthology.orchestra.run.vm04.stdout:Created osd(s) 3 on host 'vm04'
2026-03-09T14:20:09.011 DEBUG:teuthology.orchestra.run.vm04:osd.3> sudo journalctl -f -n 0 -u ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.3.service
2026-03-09T14:20:09.012 INFO:tasks.cephadm:Deploying osd.4 on vm04 with /dev/vdc...
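The "Deploying osd.4 ..." line repeats the same per-device sequence that just completed for osd.3: zap the device, then ask the orchestrator for a new OSD on it (both invocations appear verbatim below). A minimal sketch of that loop, assuming a hypothetical run_cephadm helper for the ssh/sudo plumbing and omitting the --image/-c/-k/--fsid flags the real invocations carry:

    import subprocess

    CEPHADM = "/home/ubuntu/cephtest/cephadm"  # path used in this run

    def run_cephadm(host: str, args: list[str]) -> None:
        # Hypothetical helper: invoke the cephadm binary on `host` via ssh,
        # mirroring the sudo invocations teuthology logs for vm04.
        subprocess.run(["ssh", host, "sudo", CEPHADM, *args], check=True)

    def deploy_osd(host: str, dev: str) -> None:
        # Wipe any leftover LVM/partition state, then let mgr/cephadm
        # create and start an OSD daemon on that device.
        run_cephadm(host, ["ceph-volume", "--", "lvm", "zap", dev])
        run_cephadm(host, ["shell", "--", "ceph", "orch", "daemon", "add",
                           "osd", f"{host}:{dev}"])

    deploy_osd("vm04", "/dev/vdc")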
2026-03-09T14:20:09.012 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- lvm zap /dev/vdc
2026-03-09T14:20:09.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:09 vm04 bash[19581]: cluster 2026-03-09T14:20:06.342589+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T14:20:09.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:09 vm04 bash[19581]: cluster 2026-03-09T14:20:06.342627+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T14:20:09.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:09 vm04 bash[19581]: cluster 2026-03-09T14:20:08.246941+0000 mon.a (mon.0) 449 : cluster [INF] osd.3 [v2:192.168.123.104:6808/1600567220,v1:192.168.123.104:6809/1600567220] boot
2026-03-09T14:20:09.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:09 vm04 bash[19581]: cluster 2026-03-09T14:20:08.247051+0000 mon.a (mon.0) 450 : cluster [DBG] osdmap e25: 4 total, 4 up, 4 in
2026-03-09T14:20:09.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:09 vm04 bash[19581]: audit 2026-03-09T14:20:08.248722+0000 mon.a (mon.0) 451 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T14:20:09.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:09 vm04 bash[19581]: audit 2026-03-09T14:20:08.934072+0000 mon.a (mon.0) 452 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:20:09.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:09 vm04 bash[19581]: audit 2026-03-09T14:20:08.941829+0000 mon.a (mon.0) 453 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:09.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:09 vm04 bash[19581]: audit 2026-03-09T14:20:08.945654+0000 mon.a (mon.0) 454 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:09.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:09 vm04 bash[19581]: cluster 2026-03-09T14:20:09.245297+0000 mon.a (mon.0) 455 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in
2026-03-09T14:20:09.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:09 vm05 bash[20070]: cluster 2026-03-09T14:20:06.342589+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T14:20:09.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:09 vm05 bash[20070]: cluster 2026-03-09T14:20:06.342627+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T14:20:09.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:09 vm05 bash[20070]: cluster 2026-03-09T14:20:08.246941+0000 mon.a (mon.0) 449 : cluster [INF] osd.3 [v2:192.168.123.104:6808/1600567220,v1:192.168.123.104:6809/1600567220] boot
2026-03-09T14:20:09.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:09 vm05 bash[20070]: cluster 2026-03-09T14:20:08.247051+0000 mon.a (mon.0) 450 : cluster [DBG] osdmap e25: 4 total, 4 up, 4 in
2026-03-09T14:20:09.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:09 vm05 bash[20070]: audit 2026-03-09T14:20:08.248722+0000 mon.a (mon.0) 451 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T14:20:09.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:09 vm05 bash[20070]: audit 2026-03-09T14:20:08.934072+0000 mon.a (mon.0) 452 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:20:09.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:09 vm05 bash[20070]: audit 2026-03-09T14:20:08.941829+0000 mon.a (mon.0) 453 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:09.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:09 vm05 bash[20070]: audit 2026-03-09T14:20:08.945654+0000 mon.a (mon.0) 454 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:09.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:09 vm05 bash[20070]: cluster 2026-03-09T14:20:09.245297+0000 mon.a (mon.0) 455 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in
2026-03-09T14:20:09.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:09 vm03 bash[17524]: cluster 2026-03-09T14:20:06.342589+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T14:20:09.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:09 vm03 bash[17524]: cluster 2026-03-09T14:20:06.342627+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T14:20:09.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:09 vm03 bash[17524]: cluster 2026-03-09T14:20:08.246941+0000 mon.a (mon.0) 449 : cluster [INF] osd.3 [v2:192.168.123.104:6808/1600567220,v1:192.168.123.104:6809/1600567220] boot
2026-03-09T14:20:09.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:09 vm03 bash[17524]: cluster 2026-03-09T14:20:08.247051+0000 mon.a (mon.0) 450 : cluster [DBG] osdmap e25: 4 total, 4 up, 4 in
2026-03-09T14:20:09.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:09 vm03 bash[17524]: audit 2026-03-09T14:20:08.248722+0000 mon.a (mon.0) 451 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-09T14:20:09.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:09 vm03 bash[17524]: audit 2026-03-09T14:20:08.934072+0000 mon.a (mon.0) 452 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:20:09.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:09 vm03 bash[17524]: audit 2026-03-09T14:20:08.941829+0000 mon.a (mon.0) 453 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:09.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:09 vm03 bash[17524]: audit 2026-03-09T14:20:08.945654+0000 mon.a (mon.0) 454 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:09.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:09 vm03 bash[17524]: cluster 2026-03-09T14:20:09.245297+0000 mon.a (mon.0) 455 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in
2026-03-09T14:20:10.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:10 vm03 bash[17524]: cluster 2026-03-09T14:20:08.925838+0000 mgr.x (mgr.14150) 152 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:20:10.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:10 vm03 bash[17524]: cluster 2026-03-09T14:20:10.247484+0000 mon.a (mon.0) 456 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in
2026-03-09T14:20:10.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:10 vm04 bash[19581]: cluster 2026-03-09T14:20:08.925838+0000 mgr.x (mgr.14150) 152 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
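At this point osd.3 has booted and the osdmap is settling at "4 total, 4 up, 4 in" (epochs e25 through e27). A hedged sketch of how a harness might wait for that state, assuming a local ceph CLI with an admin keyring and a num_up_osds field in the JSON summary; this is not necessarily the exact check teuthology performs:

    import json
    import subprocess
    import time

    def wait_for_osds_up(want: int, timeout: float = 300.0) -> None:
        # Poll `ceph osd stat` until the number of "up" OSDs reaches `want`,
        # matching the 'osdmap eNN: 4 total, 4 up, 4 in' progression above.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            out = subprocess.check_output(
                ["ceph", "osd", "stat", "--format", "json"])
            if json.loads(out)["num_up_osds"] >= want:
                return
            time.sleep(2)
        raise TimeoutError(f"fewer than {want} OSDs up after {timeout}s")

    wait_for_osds_up(4)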
2026-03-09T14:20:10.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:10 vm04 bash[19581]: cluster 2026-03-09T14:20:10.247484+0000 mon.a (mon.0) 456 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in
2026-03-09T14:20:10.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:10 vm05 bash[20070]: cluster 2026-03-09T14:20:08.925838+0000 mgr.x (mgr.14150) 152 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-09T14:20:10.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:10 vm05 bash[20070]: cluster 2026-03-09T14:20:10.247484+0000 mon.a (mon.0) 456 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in
2026-03-09T14:20:12.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:12 vm03 bash[17524]: cluster 2026-03-09T14:20:10.926096+0000 mgr.x (mgr.14150) 153 : cluster [DBG] pgmap v113: 1 pgs: 1 peering; 0 B data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-09T14:20:12.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:12 vm04 bash[19581]: cluster 2026-03-09T14:20:10.926096+0000 mgr.x (mgr.14150) 153 : cluster [DBG] pgmap v113: 1 pgs: 1 peering; 0 B data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-09T14:20:12.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:12 vm05 bash[20070]: cluster 2026-03-09T14:20:10.926096+0000 mgr.x (mgr.14150) 153 : cluster [DBG] pgmap v113: 1 pgs: 1 peering; 0 B data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-09T14:20:13.670 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.b/config
2026-03-09T14:20:14.531 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:14 vm04 bash[19581]: cluster 2026-03-09T14:20:12.926416+0000 mgr.x (mgr.14150) 154 : cluster [DBG] pgmap v114: 1 pgs: 1 peering; 0 B data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-09T14:20:14.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:14 vm03 bash[17524]: cluster 2026-03-09T14:20:12.926416+0000 mgr.x (mgr.14150) 154 : cluster [DBG] pgmap v114: 1 pgs: 1 peering; 0 B data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-09T14:20:14.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:14 vm05 bash[20070]: cluster 2026-03-09T14:20:12.926416+0000 mgr.x (mgr.14150) 154 : cluster [DBG] pgmap v114: 1 pgs: 1 peering; 0 B data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-09T14:20:15.170 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:20:15.184 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph orch daemon add osd vm04:/dev/vdc
2026-03-09T14:20:15.484 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:15 vm04 bash[19581]: cephadm 2026-03-09T14:20:14.472852+0000 mgr.x (mgr.14150) 155 : cephadm [INF] Detected new or changed devices on vm04
2026-03-09T14:20:15.484 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:15 vm04 bash[19581]: audit 2026-03-09T14:20:14.478606+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:15.484 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:15 vm04 bash[19581]: audit 2026-03-09T14:20:14.486660+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:15.484 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:15 vm04 bash[19581]: audit 2026-03-09T14:20:14.487538+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:20:15.484 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:15 vm04 bash[19581]: audit 2026-03-09T14:20:14.487937+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:20:15.484 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:15 vm04 bash[19581]: cephadm 2026-03-09T14:20:14.488175+0000 mgr.x (mgr.14150) 156 : cephadm [INF] Adjusting osd_memory_target on vm04 to 2275M
2026-03-09T14:20:15.484 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:15 vm04 bash[19581]: audit 2026-03-09T14:20:14.490971+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:15.484 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:15 vm04 bash[19581]: audit 2026-03-09T14:20:14.492719+0000 mon.a (mon.0) 462 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:20:15.484 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:15 vm04 bash[19581]: audit 2026-03-09T14:20:14.493048+0000 mon.a (mon.0) 463 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:20:15.484 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:15 vm04 bash[19581]: audit 2026-03-09T14:20:14.496369+0000 mon.a (mon.0) 464 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:15.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:15 vm05 bash[20070]: cephadm 2026-03-09T14:20:14.472852+0000 mgr.x (mgr.14150) 155 : cephadm [INF] Detected new or changed devices on vm04
2026-03-09T14:20:15.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:15 vm05 bash[20070]: audit 2026-03-09T14:20:14.478606+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:15.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:15 vm05 bash[20070]: audit 2026-03-09T14:20:14.486660+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:15.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:15 vm05 bash[20070]: audit 2026-03-09T14:20:14.487538+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:20:15.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:15 vm05 bash[20070]: audit 2026-03-09T14:20:14.487937+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:20:15.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:15 vm05 bash[20070]: cephadm 2026-03-09T14:20:14.488175+0000 mgr.x (mgr.14150) 156 : cephadm [INF] Adjusting osd_memory_target on vm04 to 2275M
2026-03-09T14:20:15.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:15 vm05 bash[20070]: audit 2026-03-09T14:20:14.490971+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:15.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:15 vm05 bash[20070]: audit 2026-03-09T14:20:14.492719+0000 mon.a (mon.0) 462 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:20:15.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:15 vm05 bash[20070]: audit 2026-03-09T14:20:14.493048+0000 mon.a (mon.0) 463 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:20:15.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:15 vm05 bash[20070]: audit 2026-03-09T14:20:14.496369+0000 mon.a (mon.0) 464 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:15.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:15 vm03 bash[17524]: cephadm 2026-03-09T14:20:14.472852+0000 mgr.x (mgr.14150) 155 : cephadm [INF] Detected new or changed devices on vm04
2026-03-09T14:20:15.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:15 vm03 bash[17524]: audit 2026-03-09T14:20:14.478606+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:15.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:15 vm03 bash[17524]: audit 2026-03-09T14:20:14.486660+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:15.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:15 vm03 bash[17524]: audit 2026-03-09T14:20:14.487538+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:20:15.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:15 vm03 bash[17524]: audit 2026-03-09T14:20:14.487937+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:20:15.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:15 vm03 bash[17524]: cephadm 2026-03-09T14:20:14.488175+0000 mgr.x (mgr.14150) 156 : cephadm [INF] Adjusting osd_memory_target on vm04 to 2275M
2026-03-09T14:20:15.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:15 vm03 bash[17524]: audit 2026-03-09T14:20:14.490971+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:15.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:15 vm03 bash[17524]: audit 2026-03-09T14:20:14.492719+0000 mon.a (mon.0) 462 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:20:15.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:15 vm03 bash[17524]: audit 2026-03-09T14:20:14.493048+0000 mon.a (mon.0) 463 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:20:15.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:15 vm03 bash[17524]: audit 2026-03-09T14:20:14.496369+0000 mon.a (mon.0) 464 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:16.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:16 vm04 bash[19581]: cluster 2026-03-09T14:20:14.926630+0000 mgr.x (mgr.14150) 157 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-09T14:20:16.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:16 vm05 bash[20070]: cluster 2026-03-09T14:20:14.926630+0000 mgr.x (mgr.14150) 157 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-09T14:20:16.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:16 vm03 bash[17524]: cluster 2026-03-09T14:20:14.926630+0000 mgr.x (mgr.14150) 157 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-09T14:20:18.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:18 vm04 bash[19581]: cluster 2026-03-09T14:20:16.926874+0000 mgr.x (mgr.14150) 158 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-09T14:20:18.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:18 vm05 bash[20070]: cluster 2026-03-09T14:20:16.926874+0000 mgr.x (mgr.14150) 158 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-09T14:20:18.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:18 vm03 bash[17524]: cluster 2026-03-09T14:20:16.926874+0000 mgr.x (mgr.14150) 158 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-09T14:20:19.801 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.b/config
2026-03-09T14:20:20.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:20 vm03 bash[17524]: cluster 2026-03-09T14:20:18.927218+0000 mgr.x (mgr.14150) 159 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-09T14:20:20.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:20 vm03 bash[17524]: audit 2026-03-09T14:20:20.047725+0000 mon.a (mon.0) 465 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T14:20:20.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:20 vm03 bash[17524]: audit 2026-03-09T14:20:20.049136+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T14:20:20.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:20 vm03 bash[17524]: audit 2026-03-09T14:20:20.049847+0000 mon.a (mon.0) 467 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:20:21.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:20 vm04 bash[19581]: cluster 2026-03-09T14:20:18.927218+0000 mgr.x (mgr.14150) 159 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-09T14:20:21.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:20 vm04 bash[19581]: audit 2026-03-09T14:20:20.047725+0000 mon.a (mon.0) 465 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T14:20:21.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:20 vm04 bash[19581]: audit 2026-03-09T14:20:20.049136+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T14:20:21.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:20 vm04 bash[19581]: audit 2026-03-09T14:20:20.049847+0000 mon.a (mon.0) 467 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:20:21.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:20 vm05 bash[20070]: cluster 2026-03-09T14:20:18.927218+0000 mgr.x (mgr.14150) 159 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-09T14:20:21.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:20 vm05 bash[20070]: audit 2026-03-09T14:20:20.047725+0000 mon.a (mon.0) 465 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T14:20:21.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:20 vm05 bash[20070]: audit 2026-03-09T14:20:20.049136+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T14:20:21.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:20 vm05 bash[20070]: audit 2026-03-09T14:20:20.049847+0000 mon.a (mon.0) 467 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:20:21.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:21 vm03 bash[17524]: audit 2026-03-09T14:20:20.046402+0000 mgr.x (mgr.14150) 160 : audit [DBG] from='client.14313 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:20:22.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:21 vm04 bash[19581]: audit 2026-03-09T14:20:20.046402+0000 mgr.x (mgr.14150) 160 : audit [DBG] from='client.14313 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:20:22.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:21 vm05 bash[20070]: audit 2026-03-09T14:20:20.046402+0000 mgr.x (mgr.14150) 160 : audit [DBG] from='client.14313 -' entity='client.admin' cmd=[{"prefix": "orch
daemon add osd", "svc_arg": "vm04:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:20:22.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:21 vm05 bash[20070]: audit 2026-03-09T14:20:20.046402+0000 mgr.x (mgr.14150) 160 : audit [DBG] from='client.14313 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:20:22.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:22 vm03 bash[17524]: cluster 2026-03-09T14:20:20.927453+0000 mgr.x (mgr.14150) 161 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:22.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:22 vm03 bash[17524]: cluster 2026-03-09T14:20:20.927453+0000 mgr.x (mgr.14150) 161 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:23.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:22 vm04 bash[19581]: cluster 2026-03-09T14:20:20.927453+0000 mgr.x (mgr.14150) 161 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:23.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:22 vm04 bash[19581]: cluster 2026-03-09T14:20:20.927453+0000 mgr.x (mgr.14150) 161 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:23.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:22 vm05 bash[20070]: cluster 2026-03-09T14:20:20.927453+0000 mgr.x (mgr.14150) 161 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:23.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:22 vm05 bash[20070]: cluster 2026-03-09T14:20:20.927453+0000 mgr.x (mgr.14150) 161 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:24.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:24 vm03 bash[17524]: cluster 2026-03-09T14:20:22.927665+0000 mgr.x (mgr.14150) 162 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:24.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:24 vm03 bash[17524]: cluster 2026-03-09T14:20:22.927665+0000 mgr.x (mgr.14150) 162 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:24.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:24 vm03 bash[17524]: audit 2026-03-09T14:20:24.424352+0000 mon.b (mon.2) 15 : audit [INF] from='client.? 192.168.123.104:0/279925849' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "97a3c763-32a2-413f-8d3f-0e7163f512ed"}]: dispatch 2026-03-09T14:20:24.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:24 vm03 bash[17524]: audit 2026-03-09T14:20:24.424352+0000 mon.b (mon.2) 15 : audit [INF] from='client.? 192.168.123.104:0/279925849' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "97a3c763-32a2-413f-8d3f-0e7163f512ed"}]: dispatch 2026-03-09T14:20:24.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:24 vm03 bash[17524]: audit 2026-03-09T14:20:24.425601+0000 mon.a (mon.0) 468 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "97a3c763-32a2-413f-8d3f-0e7163f512ed"}]: dispatch 2026-03-09T14:20:24.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:24 vm03 bash[17524]: audit 2026-03-09T14:20:24.425601+0000 mon.a (mon.0) 468 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "97a3c763-32a2-413f-8d3f-0e7163f512ed"}]: dispatch 2026-03-09T14:20:24.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:24 vm03 bash[17524]: audit 2026-03-09T14:20:24.428860+0000 mon.a (mon.0) 469 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "97a3c763-32a2-413f-8d3f-0e7163f512ed"}]': finished 2026-03-09T14:20:24.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:24 vm03 bash[17524]: audit 2026-03-09T14:20:24.428860+0000 mon.a (mon.0) 469 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "97a3c763-32a2-413f-8d3f-0e7163f512ed"}]': finished 2026-03-09T14:20:24.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:24 vm03 bash[17524]: cluster 2026-03-09T14:20:24.431993+0000 mon.a (mon.0) 470 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-09T14:20:24.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:24 vm03 bash[17524]: cluster 2026-03-09T14:20:24.431993+0000 mon.a (mon.0) 470 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-09T14:20:24.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:24 vm03 bash[17524]: audit 2026-03-09T14:20:24.432353+0000 mon.a (mon.0) 471 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:20:24.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:24 vm03 bash[17524]: audit 2026-03-09T14:20:24.432353+0000 mon.a (mon.0) 471 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:20:25.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:24 vm04 bash[19581]: cluster 2026-03-09T14:20:22.927665+0000 mgr.x (mgr.14150) 162 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:25.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:24 vm04 bash[19581]: cluster 2026-03-09T14:20:22.927665+0000 mgr.x (mgr.14150) 162 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:25.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:24 vm04 bash[19581]: audit 2026-03-09T14:20:24.424352+0000 mon.b (mon.2) 15 : audit [INF] from='client.? 192.168.123.104:0/279925849' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "97a3c763-32a2-413f-8d3f-0e7163f512ed"}]: dispatch 2026-03-09T14:20:25.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:24 vm04 bash[19581]: audit 2026-03-09T14:20:24.424352+0000 mon.b (mon.2) 15 : audit [INF] from='client.? 192.168.123.104:0/279925849' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "97a3c763-32a2-413f-8d3f-0e7163f512ed"}]: dispatch 2026-03-09T14:20:25.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:24 vm04 bash[19581]: audit 2026-03-09T14:20:24.425601+0000 mon.a (mon.0) 468 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "97a3c763-32a2-413f-8d3f-0e7163f512ed"}]: dispatch 2026-03-09T14:20:25.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:24 vm04 bash[19581]: audit 2026-03-09T14:20:24.425601+0000 mon.a (mon.0) 468 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "97a3c763-32a2-413f-8d3f-0e7163f512ed"}]: dispatch 2026-03-09T14:20:25.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:24 vm04 bash[19581]: audit 2026-03-09T14:20:24.428860+0000 mon.a (mon.0) 469 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "97a3c763-32a2-413f-8d3f-0e7163f512ed"}]': finished 2026-03-09T14:20:25.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:24 vm04 bash[19581]: audit 2026-03-09T14:20:24.428860+0000 mon.a (mon.0) 469 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "97a3c763-32a2-413f-8d3f-0e7163f512ed"}]': finished 2026-03-09T14:20:25.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:24 vm04 bash[19581]: cluster 2026-03-09T14:20:24.431993+0000 mon.a (mon.0) 470 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-09T14:20:25.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:24 vm04 bash[19581]: cluster 2026-03-09T14:20:24.431993+0000 mon.a (mon.0) 470 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-09T14:20:25.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:24 vm04 bash[19581]: audit 2026-03-09T14:20:24.432353+0000 mon.a (mon.0) 471 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:20:25.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:24 vm04 bash[19581]: audit 2026-03-09T14:20:24.432353+0000 mon.a (mon.0) 471 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:20:25.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:24 vm05 bash[20070]: cluster 2026-03-09T14:20:22.927665+0000 mgr.x (mgr.14150) 162 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:25.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:24 vm05 bash[20070]: cluster 2026-03-09T14:20:22.927665+0000 mgr.x (mgr.14150) 162 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:25.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:24 vm05 bash[20070]: audit 2026-03-09T14:20:24.424352+0000 mon.b (mon.2) 15 : audit [INF] from='client.? 192.168.123.104:0/279925849' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "97a3c763-32a2-413f-8d3f-0e7163f512ed"}]: dispatch 2026-03-09T14:20:25.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:24 vm05 bash[20070]: audit 2026-03-09T14:20:24.424352+0000 mon.b (mon.2) 15 : audit [INF] from='client.? 192.168.123.104:0/279925849' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "97a3c763-32a2-413f-8d3f-0e7163f512ed"}]: dispatch 2026-03-09T14:20:25.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:24 vm05 bash[20070]: audit 2026-03-09T14:20:24.425601+0000 mon.a (mon.0) 468 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "97a3c763-32a2-413f-8d3f-0e7163f512ed"}]: dispatch 2026-03-09T14:20:25.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:24 vm05 bash[20070]: audit 2026-03-09T14:20:24.425601+0000 mon.a (mon.0) 468 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "97a3c763-32a2-413f-8d3f-0e7163f512ed"}]: dispatch 2026-03-09T14:20:25.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:24 vm05 bash[20070]: audit 2026-03-09T14:20:24.428860+0000 mon.a (mon.0) 469 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "97a3c763-32a2-413f-8d3f-0e7163f512ed"}]': finished 2026-03-09T14:20:25.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:24 vm05 bash[20070]: audit 2026-03-09T14:20:24.428860+0000 mon.a (mon.0) 469 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "97a3c763-32a2-413f-8d3f-0e7163f512ed"}]': finished 2026-03-09T14:20:25.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:24 vm05 bash[20070]: cluster 2026-03-09T14:20:24.431993+0000 mon.a (mon.0) 470 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-09T14:20:25.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:24 vm05 bash[20070]: cluster 2026-03-09T14:20:24.431993+0000 mon.a (mon.0) 470 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-09T14:20:25.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:24 vm05 bash[20070]: audit 2026-03-09T14:20:24.432353+0000 mon.a (mon.0) 471 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:20:25.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:24 vm05 bash[20070]: audit 2026-03-09T14:20:24.432353+0000 mon.a (mon.0) 471 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:20:25.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:25 vm03 bash[17524]: audit 2026-03-09T14:20:25.002476+0000 mon.b (mon.2) 16 : audit [DBG] from='client.? 192.168.123.104:0/645007003' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:20:25.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:25 vm03 bash[17524]: audit 2026-03-09T14:20:25.002476+0000 mon.b (mon.2) 16 : audit [DBG] from='client.? 192.168.123.104:0/645007003' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:20:26.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:25 vm04 bash[19581]: audit 2026-03-09T14:20:25.002476+0000 mon.b (mon.2) 16 : audit [DBG] from='client.? 192.168.123.104:0/645007003' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:20:26.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:25 vm04 bash[19581]: audit 2026-03-09T14:20:25.002476+0000 mon.b (mon.2) 16 : audit [DBG] from='client.? 192.168.123.104:0/645007003' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:20:26.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:25 vm05 bash[20070]: audit 2026-03-09T14:20:25.002476+0000 mon.b (mon.2) 16 : audit [DBG] from='client.? 
192.168.123.104:0/645007003' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:20:26.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:25 vm05 bash[20070]: audit 2026-03-09T14:20:25.002476+0000 mon.b (mon.2) 16 : audit [DBG] from='client.? 192.168.123.104:0/645007003' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:20:26.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:26 vm03 bash[17524]: cluster 2026-03-09T14:20:24.927902+0000 mgr.x (mgr.14150) 163 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:26.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:26 vm03 bash[17524]: cluster 2026-03-09T14:20:24.927902+0000 mgr.x (mgr.14150) 163 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:27.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:26 vm05 bash[20070]: cluster 2026-03-09T14:20:24.927902+0000 mgr.x (mgr.14150) 163 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:27.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:26 vm05 bash[20070]: cluster 2026-03-09T14:20:24.927902+0000 mgr.x (mgr.14150) 163 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:27.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:26 vm04 bash[19581]: cluster 2026-03-09T14:20:24.927902+0000 mgr.x (mgr.14150) 163 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:27.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:26 vm04 bash[19581]: cluster 2026-03-09T14:20:24.927902+0000 mgr.x (mgr.14150) 163 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:28.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:28 vm03 bash[17524]: cluster 2026-03-09T14:20:26.928170+0000 mgr.x (mgr.14150) 164 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:28.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:28 vm03 bash[17524]: cluster 2026-03-09T14:20:26.928170+0000 mgr.x (mgr.14150) 164 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:29.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:28 vm04 bash[19581]: cluster 2026-03-09T14:20:26.928170+0000 mgr.x (mgr.14150) 164 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:29.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:28 vm04 bash[19581]: cluster 2026-03-09T14:20:26.928170+0000 mgr.x (mgr.14150) 164 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:29.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:28 vm05 bash[20070]: cluster 2026-03-09T14:20:26.928170+0000 mgr.x (mgr.14150) 164 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:29.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:28 vm05 bash[20070]: cluster 2026-03-09T14:20:26.928170+0000 mgr.x (mgr.14150) 164 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:30.007 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:29 vm04 bash[19581]: cluster 2026-03-09T14:20:28.928547+0000 mgr.x (mgr.14150) 165 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:30.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:29 vm04 bash[19581]: cluster 2026-03-09T14:20:28.928547+0000 mgr.x (mgr.14150) 165 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:30.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:29 vm05 bash[20070]: cluster 2026-03-09T14:20:28.928547+0000 mgr.x (mgr.14150) 165 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:30.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:29 vm05 bash[20070]: cluster 2026-03-09T14:20:28.928547+0000 mgr.x (mgr.14150) 165 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:30.052 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:29 vm03 bash[17524]: cluster 2026-03-09T14:20:28.928547+0000 mgr.x (mgr.14150) 165 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:30.052 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:29 vm03 bash[17524]: cluster 2026-03-09T14:20:28.928547+0000 mgr.x (mgr.14150) 165 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:32.257 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:31 vm04 bash[19581]: cluster 2026-03-09T14:20:30.929085+0000 mgr.x (mgr.14150) 166 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:32.257 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:31 vm04 bash[19581]: cluster 2026-03-09T14:20:30.929085+0000 mgr.x (mgr.14150) 166 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:32.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:31 vm05 bash[20070]: cluster 2026-03-09T14:20:30.929085+0000 mgr.x (mgr.14150) 166 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:32.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:31 vm05 bash[20070]: cluster 2026-03-09T14:20:30.929085+0000 mgr.x (mgr.14150) 166 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:32.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:31 vm03 bash[17524]: cluster 2026-03-09T14:20:30.929085+0000 mgr.x (mgr.14150) 166 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:32.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:31 vm03 bash[17524]: cluster 2026-03-09T14:20:30.929085+0000 mgr.x (mgr.14150) 166 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:33.882 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:33 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:20:33.882 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:20:33 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:20:33.882 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:20:33 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:20:34.132 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:33 vm04 bash[19581]: cluster 2026-03-09T14:20:32.929485+0000 mgr.x (mgr.14150) 167 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:34.132 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:33 vm04 bash[19581]: cluster 2026-03-09T14:20:32.929485+0000 mgr.x (mgr.14150) 167 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:34.132 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:33 vm04 bash[19581]: audit 2026-03-09T14:20:33.117899+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T14:20:34.132 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:33 vm04 bash[19581]: audit 2026-03-09T14:20:33.117899+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T14:20:34.132 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:33 vm04 bash[19581]: audit 2026-03-09T14:20:33.118450+0000 mon.a (mon.0) 473 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:20:34.133 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:33 vm04 bash[19581]: audit 2026-03-09T14:20:33.118450+0000 mon.a (mon.0) 473 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:20:34.133 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:33 vm04 bash[19581]: cephadm 2026-03-09T14:20:33.118860+0000 mgr.x (mgr.14150) 168 : cephadm [INF] Deploying daemon osd.4 on vm04 2026-03-09T14:20:34.133 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:33 vm04 bash[19581]: cephadm 2026-03-09T14:20:33.118860+0000 mgr.x (mgr.14150) 168 : cephadm [INF] Deploying daemon osd.4 on vm04 2026-03-09T14:20:34.133 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:34 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:20:34.133 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:20:34 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:20:34.133 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:20:34 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:20:34.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:33 vm05 bash[20070]: cluster 2026-03-09T14:20:32.929485+0000 mgr.x (mgr.14150) 167 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:34.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:33 vm05 bash[20070]: cluster 2026-03-09T14:20:32.929485+0000 mgr.x (mgr.14150) 167 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:34.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:33 vm05 bash[20070]: audit 2026-03-09T14:20:33.117899+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T14:20:34.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:33 vm05 bash[20070]: audit 2026-03-09T14:20:33.117899+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T14:20:34.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:33 vm05 bash[20070]: audit 2026-03-09T14:20:33.118450+0000 mon.a (mon.0) 473 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:20:34.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:33 vm05 bash[20070]: audit 2026-03-09T14:20:33.118450+0000 mon.a (mon.0) 473 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:20:34.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:33 vm05 bash[20070]: cephadm 2026-03-09T14:20:33.118860+0000 mgr.x (mgr.14150) 168 : cephadm [INF] Deploying daemon osd.4 on vm04 2026-03-09T14:20:34.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:33 vm05 bash[20070]: cephadm 2026-03-09T14:20:33.118860+0000 mgr.x (mgr.14150) 168 : cephadm [INF] Deploying daemon osd.4 on vm04 2026-03-09T14:20:34.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:33 vm03 bash[17524]: cluster 2026-03-09T14:20:32.929485+0000 mgr.x (mgr.14150) 167 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:34.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:33 vm03 bash[17524]: cluster 2026-03-09T14:20:32.929485+0000 mgr.x (mgr.14150) 167 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:34.302 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:33 vm03 bash[17524]: audit 2026-03-09T14:20:33.117899+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T14:20:34.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:33 vm03 bash[17524]: audit 2026-03-09T14:20:33.117899+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T14:20:34.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:33 vm03 bash[17524]: audit 2026-03-09T14:20:33.118450+0000 mon.a (mon.0) 473 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:20:34.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:33 vm03 bash[17524]: audit 2026-03-09T14:20:33.118450+0000 mon.a (mon.0) 473 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:20:34.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:33 vm03 bash[17524]: cephadm 2026-03-09T14:20:33.118860+0000 mgr.x (mgr.14150) 168 : cephadm [INF] Deploying daemon osd.4 on vm04 2026-03-09T14:20:34.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:33 vm03 bash[17524]: cephadm 2026-03-09T14:20:33.118860+0000 mgr.x (mgr.14150) 168 : cephadm [INF] Deploying daemon osd.4 on vm04 2026-03-09T14:20:35.247 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:34 vm04 bash[19581]: audit 2026-03-09T14:20:34.112188+0000 mon.a (mon.0) 474 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:20:35.248 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:34 vm04 bash[19581]: audit 2026-03-09T14:20:34.112188+0000 mon.a (mon.0) 474 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:20:35.248 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:34 vm04 bash[19581]: audit 2026-03-09T14:20:34.118072+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:35.248 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:34 vm04 bash[19581]: audit 2026-03-09T14:20:34.118072+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:35.248 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:34 vm04 bash[19581]: audit 2026-03-09T14:20:34.122101+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:35.248 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:34 vm04 bash[19581]: audit 2026-03-09T14:20:34.122101+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:35.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:34 vm05 bash[20070]: audit 2026-03-09T14:20:34.112188+0000 mon.a (mon.0) 474 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:20:35.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:34 vm05 bash[20070]: audit 2026-03-09T14:20:34.112188+0000 mon.a (mon.0) 474 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": 
"config dump", "format": "json"}]: dispatch 2026-03-09T14:20:35.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:34 vm05 bash[20070]: audit 2026-03-09T14:20:34.118072+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:35.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:34 vm05 bash[20070]: audit 2026-03-09T14:20:34.118072+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:35.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:34 vm05 bash[20070]: audit 2026-03-09T14:20:34.122101+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:35.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:34 vm05 bash[20070]: audit 2026-03-09T14:20:34.122101+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:35.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:34 vm03 bash[17524]: audit 2026-03-09T14:20:34.112188+0000 mon.a (mon.0) 474 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:20:35.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:34 vm03 bash[17524]: audit 2026-03-09T14:20:34.112188+0000 mon.a (mon.0) 474 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:20:35.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:34 vm03 bash[17524]: audit 2026-03-09T14:20:34.118072+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:35.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:34 vm03 bash[17524]: audit 2026-03-09T14:20:34.118072+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:35.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:34 vm03 bash[17524]: audit 2026-03-09T14:20:34.122101+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:35.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:34 vm03 bash[17524]: audit 2026-03-09T14:20:34.122101+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:36.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:36 vm04 bash[19581]: cluster 2026-03-09T14:20:34.929708+0000 mgr.x (mgr.14150) 169 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:36.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:36 vm04 bash[19581]: cluster 2026-03-09T14:20:34.929708+0000 mgr.x (mgr.14150) 169 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:36.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:36 vm05 bash[20070]: cluster 2026-03-09T14:20:34.929708+0000 mgr.x (mgr.14150) 169 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:36.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:36 vm05 bash[20070]: cluster 2026-03-09T14:20:34.929708+0000 mgr.x (mgr.14150) 169 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:36.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 
14:20:36 vm03 bash[17524]: cluster 2026-03-09T14:20:34.929708+0000 mgr.x (mgr.14150) 169 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:36.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:36 vm03 bash[17524]: cluster 2026-03-09T14:20:34.929708+0000 mgr.x (mgr.14150) 169 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:38.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:38 vm04 bash[19581]: cluster 2026-03-09T14:20:36.929955+0000 mgr.x (mgr.14150) 170 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:38.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:38 vm04 bash[19581]: cluster 2026-03-09T14:20:36.929955+0000 mgr.x (mgr.14150) 170 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:38.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:38 vm04 bash[19581]: audit 2026-03-09T14:20:37.351122+0000 mon.c (mon.1) 8 : audit [INF] from='osd.4 [v2:192.168.123.104:6816/3814952582,v1:192.168.123.104:6817/3814952582]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:20:38.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:38 vm04 bash[19581]: audit 2026-03-09T14:20:37.351122+0000 mon.c (mon.1) 8 : audit [INF] from='osd.4 [v2:192.168.123.104:6816/3814952582,v1:192.168.123.104:6817/3814952582]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:20:38.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:38 vm04 bash[19581]: audit 2026-03-09T14:20:37.351869+0000 mon.a (mon.0) 477 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:20:38.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:38 vm04 bash[19581]: audit 2026-03-09T14:20:37.351869+0000 mon.a (mon.0) 477 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:20:38.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:38 vm05 bash[20070]: cluster 2026-03-09T14:20:36.929955+0000 mgr.x (mgr.14150) 170 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:38.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:38 vm05 bash[20070]: cluster 2026-03-09T14:20:36.929955+0000 mgr.x (mgr.14150) 170 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:38.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:38 vm05 bash[20070]: audit 2026-03-09T14:20:37.351122+0000 mon.c (mon.1) 8 : audit [INF] from='osd.4 [v2:192.168.123.104:6816/3814952582,v1:192.168.123.104:6817/3814952582]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:20:38.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:38 vm05 bash[20070]: audit 2026-03-09T14:20:37.351122+0000 mon.c (mon.1) 8 : audit [INF] from='osd.4 [v2:192.168.123.104:6816/3814952582,v1:192.168.123.104:6817/3814952582]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:20:38.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:38 vm05 
bash[20070]: audit 2026-03-09T14:20:37.351869+0000 mon.a (mon.0) 477 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:20:38.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:38 vm05 bash[20070]: audit 2026-03-09T14:20:37.351869+0000 mon.a (mon.0) 477 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:20:38.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:38 vm03 bash[17524]: cluster 2026-03-09T14:20:36.929955+0000 mgr.x (mgr.14150) 170 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:38.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:38 vm03 bash[17524]: cluster 2026-03-09T14:20:36.929955+0000 mgr.x (mgr.14150) 170 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:38.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:38 vm03 bash[17524]: audit 2026-03-09T14:20:37.351122+0000 mon.c (mon.1) 8 : audit [INF] from='osd.4 [v2:192.168.123.104:6816/3814952582,v1:192.168.123.104:6817/3814952582]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:20:38.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:38 vm03 bash[17524]: audit 2026-03-09T14:20:37.351122+0000 mon.c (mon.1) 8 : audit [INF] from='osd.4 [v2:192.168.123.104:6816/3814952582,v1:192.168.123.104:6817/3814952582]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:20:38.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:38 vm03 bash[17524]: audit 2026-03-09T14:20:37.351869+0000 mon.a (mon.0) 477 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:20:38.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:38 vm03 bash[17524]: audit 2026-03-09T14:20:37.351869+0000 mon.a (mon.0) 477 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:20:39.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:39 vm04 bash[19581]: audit 2026-03-09T14:20:38.141109+0000 mon.a (mon.0) 478 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T14:20:39.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:39 vm04 bash[19581]: audit 2026-03-09T14:20:38.141109+0000 mon.a (mon.0) 478 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T14:20:39.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:39 vm04 bash[19581]: cluster 2026-03-09T14:20:38.143402+0000 mon.a (mon.0) 479 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-09T14:20:39.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:39 vm04 bash[19581]: cluster 2026-03-09T14:20:38.143402+0000 mon.a (mon.0) 479 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-09T14:20:39.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:39 vm04 bash[19581]: audit 2026-03-09T14:20:38.143594+0000 mon.a (mon.0) 480 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:20:39.507 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:39 vm04 bash[19581]: audit 2026-03-09T14:20:38.143594+0000 mon.a (mon.0) 480 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:20:39.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:39 vm04 bash[19581]: audit 2026-03-09T14:20:38.144076+0000 mon.c (mon.1) 9 : audit [INF] from='osd.4 [v2:192.168.123.104:6816/3814952582,v1:192.168.123.104:6817/3814952582]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:20:39.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:39 vm04 bash[19581]: audit 2026-03-09T14:20:38.144076+0000 mon.c (mon.1) 9 : audit [INF] from='osd.4 [v2:192.168.123.104:6816/3814952582,v1:192.168.123.104:6817/3814952582]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:20:39.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:39 vm04 bash[19581]: audit 2026-03-09T14:20:38.144832+0000 mon.a (mon.0) 481 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:20:39.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:39 vm04 bash[19581]: audit 2026-03-09T14:20:38.144832+0000 mon.a (mon.0) 481 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:20:39.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:39 vm05 bash[20070]: audit 2026-03-09T14:20:38.141109+0000 mon.a (mon.0) 478 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T14:20:39.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:39 vm05 bash[20070]: audit 2026-03-09T14:20:38.141109+0000 mon.a (mon.0) 478 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T14:20:39.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:39 vm05 bash[20070]: cluster 2026-03-09T14:20:38.143402+0000 mon.a (mon.0) 479 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-09T14:20:39.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:39 vm05 bash[20070]: cluster 2026-03-09T14:20:38.143402+0000 mon.a (mon.0) 479 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-09T14:20:39.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:39 vm05 bash[20070]: audit 2026-03-09T14:20:38.143594+0000 mon.a (mon.0) 480 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:20:39.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:39 vm05 bash[20070]: audit 2026-03-09T14:20:38.143594+0000 mon.a (mon.0) 480 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:20:39.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:39 vm05 bash[20070]: audit 2026-03-09T14:20:38.144076+0000 mon.c (mon.1) 9 : audit [INF] from='osd.4 [v2:192.168.123.104:6816/3814952582,v1:192.168.123.104:6817/3814952582]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", 
"root=default"]}]: dispatch 2026-03-09T14:20:39.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:39 vm05 bash[20070]: audit 2026-03-09T14:20:38.144076+0000 mon.c (mon.1) 9 : audit [INF] from='osd.4 [v2:192.168.123.104:6816/3814952582,v1:192.168.123.104:6817/3814952582]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:20:39.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:39 vm05 bash[20070]: audit 2026-03-09T14:20:38.144832+0000 mon.a (mon.0) 481 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:20:39.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:39 vm05 bash[20070]: audit 2026-03-09T14:20:38.144832+0000 mon.a (mon.0) 481 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:20:39.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:39 vm03 bash[17524]: audit 2026-03-09T14:20:38.141109+0000 mon.a (mon.0) 478 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T14:20:39.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:39 vm03 bash[17524]: audit 2026-03-09T14:20:38.141109+0000 mon.a (mon.0) 478 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T14:20:39.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:39 vm03 bash[17524]: cluster 2026-03-09T14:20:38.143402+0000 mon.a (mon.0) 479 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-09T14:20:39.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:39 vm03 bash[17524]: cluster 2026-03-09T14:20:38.143402+0000 mon.a (mon.0) 479 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-09T14:20:39.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:39 vm03 bash[17524]: audit 2026-03-09T14:20:38.143594+0000 mon.a (mon.0) 480 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:20:39.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:39 vm03 bash[17524]: audit 2026-03-09T14:20:38.143594+0000 mon.a (mon.0) 480 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:20:39.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:39 vm03 bash[17524]: audit 2026-03-09T14:20:38.144076+0000 mon.c (mon.1) 9 : audit [INF] from='osd.4 [v2:192.168.123.104:6816/3814952582,v1:192.168.123.104:6817/3814952582]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:20:39.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:39 vm03 bash[17524]: audit 2026-03-09T14:20:38.144076+0000 mon.c (mon.1) 9 : audit [INF] from='osd.4 [v2:192.168.123.104:6816/3814952582,v1:192.168.123.104:6817/3814952582]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:20:39.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:39 vm03 bash[17524]: audit 2026-03-09T14:20:38.144832+0000 mon.a (mon.0) 481 : audit [INF] from='osd.4 ' entity='osd.4' 
cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:20:39.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:39 vm03 bash[17524]: audit 2026-03-09T14:20:38.144832+0000 mon.a (mon.0) 481 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:20:40.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:40 vm05 bash[20070]: cluster 2026-03-09T14:20:38.930418+0000 mgr.x (mgr.14150) 171 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:40.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:40 vm05 bash[20070]: cluster 2026-03-09T14:20:38.930418+0000 mgr.x (mgr.14150) 171 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:40.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:40 vm05 bash[20070]: audit 2026-03-09T14:20:39.143868+0000 mon.a (mon.0) 482 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T14:20:40.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:40 vm05 bash[20070]: audit 2026-03-09T14:20:39.143868+0000 mon.a (mon.0) 482 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T14:20:40.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:40 vm05 bash[20070]: cluster 2026-03-09T14:20:39.146089+0000 mon.a (mon.0) 483 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in 2026-03-09T14:20:40.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:40 vm05 bash[20070]: cluster 2026-03-09T14:20:39.146089+0000 mon.a (mon.0) 483 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in 2026-03-09T14:20:40.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:40 vm05 bash[20070]: audit 2026-03-09T14:20:39.149725+0000 mon.a (mon.0) 484 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:20:40.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:40 vm05 bash[20070]: audit 2026-03-09T14:20:39.149725+0000 mon.a (mon.0) 484 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:20:40.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:40 vm05 bash[20070]: audit 2026-03-09T14:20:39.157446+0000 mon.a (mon.0) 485 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:20:40.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:40 vm05 bash[20070]: audit 2026-03-09T14:20:39.157446+0000 mon.a (mon.0) 485 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:20:40.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:40 vm05 bash[20070]: audit 2026-03-09T14:20:40.149547+0000 mon.a (mon.0) 486 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:20:40.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:40 vm05 bash[20070]: audit 2026-03-09T14:20:40.149547+0000 mon.a (mon.0) 486 : 
2026-03-09T14:20:40.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:40 vm04 bash[19581]: cluster 2026-03-09T14:20:38.930418+0000 mgr.x (mgr.14150) 171 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-09T14:20:40.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:40 vm04 bash[19581]: audit 2026-03-09T14:20:39.143868+0000 mon.a (mon.0) 482 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished
2026-03-09T14:20:40.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:40 vm04 bash[19581]: cluster 2026-03-09T14:20:39.146089+0000 mon.a (mon.0) 483 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in
2026-03-09T14:20:40.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:40 vm04 bash[19581]: audit 2026-03-09T14:20:39.149725+0000 mon.a (mon.0) 484 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-09T14:20:40.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:40 vm04 bash[19581]: audit 2026-03-09T14:20:39.157446+0000 mon.a (mon.0) 485 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-09T14:20:40.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:40 vm04 bash[19581]: audit 2026-03-09T14:20:40.149547+0000 mon.a (mon.0) 486 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-09T14:20:40.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:40 vm03 bash[17524]: cluster 2026-03-09T14:20:38.930418+0000 mgr.x (mgr.14150) 171 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-09T14:20:40.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:40 vm03 bash[17524]: audit 2026-03-09T14:20:39.143868+0000 mon.a (mon.0) 482 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished
2026-03-09T14:20:40.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:40 vm03 bash[17524]: cluster 2026-03-09T14:20:39.146089+0000 mon.a (mon.0) 483 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in
2026-03-09T14:20:40.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:40 vm03 bash[17524]: audit 2026-03-09T14:20:39.149725+0000 mon.a (mon.0) 484 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-09T14:20:40.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:40 vm03 bash[17524]: audit 2026-03-09T14:20:39.157446+0000 mon.a (mon.0) 485 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-09T14:20:40.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:40 vm03 bash[17524]: audit 2026-03-09T14:20:40.149547+0000 mon.a (mon.0) 486 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-09T14:20:41.309 INFO:teuthology.orchestra.run.vm04.stdout:Created osd(s) 4 on host 'vm04'
2026-03-09T14:20:41.321 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:41 vm04 bash[19581]: cluster 2026-03-09T14:20:38.397606+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T14:20:41.321 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:41 vm04 bash[19581]: cluster 2026-03-09T14:20:38.397647+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T14:20:41.321 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:41 vm04 bash[19581]: cluster 2026-03-09T14:20:40.158705+0000 mon.a (mon.0) 487 : cluster [INF] osd.4 [v2:192.168.123.104:6816/3814952582,v1:192.168.123.104:6817/3814952582] boot
2026-03-09T14:20:41.321 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:41 vm04 bash[19581]: cluster 2026-03-09T14:20:40.158766+0000 mon.a (mon.0) 488 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in
2026-03-09T14:20:41.321 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:41 vm04 bash[19581]: audit 2026-03-09T14:20:40.184567+0000 mon.a (mon.0) 489 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-09T14:20:41.321 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:41 vm04 bash[19581]: audit 2026-03-09T14:20:40.268469+0000 mon.a (mon.0) 490 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:41.321 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:41 vm04 bash[19581]: audit 2026-03-09T14:20:40.273331+0000 mon.a (mon.0) 491 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:41.322 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:41 vm04 bash[19581]: audit 2026-03-09T14:20:40.637187+0000 mon.a (mon.0) 492 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:20:41.322 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:41 vm04 bash[19581]: audit 2026-03-09T14:20:40.637665+0000 mon.a (mon.0) 493 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:20:41.322 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:41 vm04 bash[19581]: audit 2026-03-09T14:20:40.641895+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:41.396 DEBUG:teuthology.orchestra.run.vm04:osd.4> sudo journalctl -f -n 0 -u ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.4.service
2026-03-09T14:20:41.397 INFO:tasks.cephadm:Deploying osd.5 on vm05 with /dev/vde...
2026-03-09T14:20:41.397 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- lvm zap /dev/vde
2026-03-09T14:20:41.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:41 vm05 bash[20070]: cluster 2026-03-09T14:20:38.397606+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T14:20:41.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:41 vm05 bash[20070]: cluster 2026-03-09T14:20:38.397647+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T14:20:41.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:41 vm05 bash[20070]: cluster 2026-03-09T14:20:40.158705+0000 mon.a (mon.0) 487 : cluster [INF] osd.4 [v2:192.168.123.104:6816/3814952582,v1:192.168.123.104:6817/3814952582] boot
2026-03-09T14:20:41.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:41 vm05 bash[20070]: cluster 2026-03-09T14:20:40.158766+0000 mon.a (mon.0) 488 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in
2026-03-09T14:20:41.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:41 vm05 bash[20070]: audit 2026-03-09T14:20:40.184567+0000 mon.a (mon.0) 489 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:20:41.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:41 vm05 bash[20070]: audit 2026-03-09T14:20:40.268469+0000 mon.a (mon.0) 490 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:41.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:41 vm05 bash[20070]: audit 2026-03-09T14:20:40.268469+0000 mon.a (mon.0) 490 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:41.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:41 vm05 bash[20070]: audit 2026-03-09T14:20:40.273331+0000 mon.a (mon.0) 491 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:41.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:41 vm05 bash[20070]: audit 2026-03-09T14:20:40.273331+0000 mon.a (mon.0) 491 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:41.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:41 vm05 bash[20070]: audit 2026-03-09T14:20:40.637187+0000 mon.a (mon.0) 492 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:20:41.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:41 vm05 bash[20070]: audit 2026-03-09T14:20:40.637187+0000 mon.a (mon.0) 492 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:20:41.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:41 vm05 bash[20070]: audit 2026-03-09T14:20:40.637665+0000 mon.a (mon.0) 493 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:20:41.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:41 vm05 bash[20070]: audit 2026-03-09T14:20:40.637665+0000 mon.a (mon.0) 493 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:20:41.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:41 vm05 bash[20070]: audit 2026-03-09T14:20:40.641895+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:41.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:41 vm05 bash[20070]: audit 2026-03-09T14:20:40.641895+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:41.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:41 vm03 bash[17524]: cluster 2026-03-09T14:20:38.397606+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T14:20:41.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:41 vm03 bash[17524]: cluster 2026-03-09T14:20:38.397606+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T14:20:41.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:41 vm03 bash[17524]: cluster 2026-03-09T14:20:38.397647+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T14:20:41.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:41 vm03 bash[17524]: cluster 2026-03-09T14:20:38.397647+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T14:20:41.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:41 vm03 bash[17524]: cluster 2026-03-09T14:20:40.158705+0000 mon.a (mon.0) 487 : cluster [INF] osd.4 
[v2:192.168.123.104:6816/3814952582,v1:192.168.123.104:6817/3814952582] boot 2026-03-09T14:20:41.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:41 vm03 bash[17524]: cluster 2026-03-09T14:20:40.158705+0000 mon.a (mon.0) 487 : cluster [INF] osd.4 [v2:192.168.123.104:6816/3814952582,v1:192.168.123.104:6817/3814952582] boot 2026-03-09T14:20:41.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:41 vm03 bash[17524]: cluster 2026-03-09T14:20:40.158766+0000 mon.a (mon.0) 488 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in 2026-03-09T14:20:41.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:41 vm03 bash[17524]: cluster 2026-03-09T14:20:40.158766+0000 mon.a (mon.0) 488 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in 2026-03-09T14:20:41.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:41 vm03 bash[17524]: audit 2026-03-09T14:20:40.184567+0000 mon.a (mon.0) 489 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:20:41.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:41 vm03 bash[17524]: audit 2026-03-09T14:20:40.184567+0000 mon.a (mon.0) 489 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:20:41.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:41 vm03 bash[17524]: audit 2026-03-09T14:20:40.268469+0000 mon.a (mon.0) 490 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:41.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:41 vm03 bash[17524]: audit 2026-03-09T14:20:40.268469+0000 mon.a (mon.0) 490 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:41.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:41 vm03 bash[17524]: audit 2026-03-09T14:20:40.273331+0000 mon.a (mon.0) 491 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:41.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:41 vm03 bash[17524]: audit 2026-03-09T14:20:40.273331+0000 mon.a (mon.0) 491 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:41.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:41 vm03 bash[17524]: audit 2026-03-09T14:20:40.637187+0000 mon.a (mon.0) 492 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:20:41.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:41 vm03 bash[17524]: audit 2026-03-09T14:20:40.637187+0000 mon.a (mon.0) 492 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:20:41.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:41 vm03 bash[17524]: audit 2026-03-09T14:20:40.637665+0000 mon.a (mon.0) 493 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:20:41.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:41 vm03 bash[17524]: audit 2026-03-09T14:20:40.637665+0000 mon.a (mon.0) 493 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:20:41.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:41 vm03 bash[17524]: audit 2026-03-09T14:20:40.641895+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.14150 
192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:41.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:41 vm03 bash[17524]: audit 2026-03-09T14:20:40.641895+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:42.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:42 vm03 bash[17524]: cluster 2026-03-09T14:20:40.930677+0000 mgr.x (mgr.14150) 172 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:42.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:42 vm03 bash[17524]: cluster 2026-03-09T14:20:40.930677+0000 mgr.x (mgr.14150) 172 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:42.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:42 vm03 bash[17524]: cluster 2026-03-09T14:20:41.278070+0000 mon.a (mon.0) 495 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-09T14:20:42.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:42 vm03 bash[17524]: cluster 2026-03-09T14:20:41.278070+0000 mon.a (mon.0) 495 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-09T14:20:42.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:42 vm03 bash[17524]: audit 2026-03-09T14:20:41.298328+0000 mon.a (mon.0) 496 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:20:42.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:42 vm03 bash[17524]: audit 2026-03-09T14:20:41.298328+0000 mon.a (mon.0) 496 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:20:42.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:42 vm03 bash[17524]: audit 2026-03-09T14:20:41.303402+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:42.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:42 vm03 bash[17524]: audit 2026-03-09T14:20:41.303402+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:42.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:42 vm03 bash[17524]: audit 2026-03-09T14:20:41.307958+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:42.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:42 vm03 bash[17524]: audit 2026-03-09T14:20:41.307958+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:42.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:42 vm04 bash[19581]: cluster 2026-03-09T14:20:40.930677+0000 mgr.x (mgr.14150) 172 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:42.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:42 vm04 bash[19581]: cluster 2026-03-09T14:20:40.930677+0000 mgr.x (mgr.14150) 172 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:42.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:42 vm04 bash[19581]: cluster 2026-03-09T14:20:41.278070+0000 mon.a (mon.0) 495 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-09T14:20:42.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:42 vm04 bash[19581]: cluster 2026-03-09T14:20:41.278070+0000 mon.a 
(mon.0) 495 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-09T14:20:42.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:42 vm04 bash[19581]: audit 2026-03-09T14:20:41.298328+0000 mon.a (mon.0) 496 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:20:42.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:42 vm04 bash[19581]: audit 2026-03-09T14:20:41.298328+0000 mon.a (mon.0) 496 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:20:42.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:42 vm04 bash[19581]: audit 2026-03-09T14:20:41.303402+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:42.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:42 vm04 bash[19581]: audit 2026-03-09T14:20:41.303402+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:42.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:42 vm04 bash[19581]: audit 2026-03-09T14:20:41.307958+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:42.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:42 vm04 bash[19581]: audit 2026-03-09T14:20:41.307958+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:42.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:42 vm05 bash[20070]: cluster 2026-03-09T14:20:40.930677+0000 mgr.x (mgr.14150) 172 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:42.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:42 vm05 bash[20070]: cluster 2026-03-09T14:20:40.930677+0000 mgr.x (mgr.14150) 172 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:20:42.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:42 vm05 bash[20070]: cluster 2026-03-09T14:20:41.278070+0000 mon.a (mon.0) 495 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-09T14:20:42.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:42 vm05 bash[20070]: cluster 2026-03-09T14:20:41.278070+0000 mon.a (mon.0) 495 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-09T14:20:42.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:42 vm05 bash[20070]: audit 2026-03-09T14:20:41.298328+0000 mon.a (mon.0) 496 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:20:42.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:42 vm05 bash[20070]: audit 2026-03-09T14:20:41.298328+0000 mon.a (mon.0) 496 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:20:42.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:42 vm05 bash[20070]: audit 2026-03-09T14:20:41.303402+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:42.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:42 vm05 bash[20070]: audit 2026-03-09T14:20:41.303402+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:42.757 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:42 vm05 bash[20070]: audit 2026-03-09T14:20:41.307958+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:42.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:42 vm05 bash[20070]: audit 2026-03-09T14:20:41.307958+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:20:44.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:44 vm03 bash[17524]: cluster 2026-03-09T14:20:42.930969+0000 mgr.x (mgr.14150) 173 : cluster [DBG] pgmap v134: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:20:44.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:44 vm03 bash[17524]: cluster 2026-03-09T14:20:42.930969+0000 mgr.x (mgr.14150) 173 : cluster [DBG] pgmap v134: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:20:44.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:44 vm04 bash[19581]: cluster 2026-03-09T14:20:42.930969+0000 mgr.x (mgr.14150) 173 : cluster [DBG] pgmap v134: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:20:44.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:44 vm04 bash[19581]: cluster 2026-03-09T14:20:42.930969+0000 mgr.x (mgr.14150) 173 : cluster [DBG] pgmap v134: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:20:44.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:44 vm05 bash[20070]: cluster 2026-03-09T14:20:42.930969+0000 mgr.x (mgr.14150) 173 : cluster [DBG] pgmap v134: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:20:44.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:44 vm05 bash[20070]: cluster 2026-03-09T14:20:42.930969+0000 mgr.x (mgr.14150) 173 : cluster [DBG] pgmap v134: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:20:45.000 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.c/config 2026-03-09T14:20:45.761 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:20:45.774 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph orch daemon add osd vm05:/dev/vde 2026-03-09T14:20:46.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:46 vm04 bash[19581]: cluster 2026-03-09T14:20:44.931232+0000 mgr.x (mgr.14150) 174 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:20:46.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:46 vm04 bash[19581]: cluster 2026-03-09T14:20:44.931232+0000 mgr.x (mgr.14150) 174 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:20:46.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:46 vm03 bash[17524]: cluster 2026-03-09T14:20:44.931232+0000 mgr.x (mgr.14150) 174 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:20:46.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:46 vm03 bash[17524]: cluster 2026-03-09T14:20:44.931232+0000 mgr.x (mgr.14150) 174 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 
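With the device zapped, the task asks the orchestrator for the new OSD via cephadm shell. This zap-then-add pair is the pattern the section repeats once per device; a minimal sketch of the same flow by hand (host and device names taken from this run, admin keyring assumed present on the host):

    # Hand the clean device to the orchestrator, then watch the OSD come up:
    sudo cephadm shell -- ceph orch daemon add osd vm05:/dev/vde
    sudo cephadm shell -- ceph orch ps --daemon-type osd
    sudo cephadm shell -- ceph osd tree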
2026-03-09T14:20:46.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:46 vm05 bash[20070]: cluster 2026-03-09T14:20:44.931232+0000 mgr.x (mgr.14150) 174 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:20:48.256 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:47 vm04 bash[19581]: cephadm 2026-03-09T14:20:46.863114+0000 mgr.x (mgr.14150) 175 : cephadm [INF] Detected new or changed devices on vm04
2026-03-09T14:20:48.256 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:47 vm04 bash[19581]: audit 2026-03-09T14:20:46.868412+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:48.256 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:47 vm04 bash[19581]: audit 2026-03-09T14:20:46.871758+0000 mon.a (mon.0) 500 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:48.256 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:47 vm04 bash[19581]: audit 2026-03-09T14:20:46.872533+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:20:48.256 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:47 vm04 bash[19581]: audit 2026-03-09T14:20:46.873065+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:20:48.256 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:47 vm04 bash[19581]: audit 2026-03-09T14:20:46.873448+0000 mon.a (mon.0) 503 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:20:48.257 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:47 vm04 bash[19581]: cephadm 2026-03-09T14:20:46.873764+0000 mgr.x (mgr.14150) 176 : cephadm [INF] Adjusting osd_memory_target on vm04 to 1517M
2026-03-09T14:20:48.257 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:47 vm04 bash[19581]: audit 2026-03-09T14:20:46.875901+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:48.257 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:47 vm04 bash[19581]: audit 2026-03-09T14:20:46.877068+0000 mon.a (mon.0) 505 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:20:48.257 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:47 vm04 bash[19581]: audit 2026-03-09T14:20:46.877522+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:20:48.257 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:47 vm04 bash[19581]: audit 2026-03-09T14:20:46.880675+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:48.257 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:47 vm04 bash[19581]: cluster 2026-03-09T14:20:46.931523+0000 mgr.x (mgr.14150) 177 : cluster [DBG] pgmap v136: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:20:48.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:47 vm05 bash[20070]: cephadm 2026-03-09T14:20:46.863114+0000 mgr.x (mgr.14150) 175 : cephadm [INF] Detected new or changed devices on vm04
2026-03-09T14:20:48.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:47 vm05 bash[20070]: audit 2026-03-09T14:20:46.868412+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:48.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:47 vm05 bash[20070]: audit 2026-03-09T14:20:46.871758+0000 mon.a (mon.0) 500 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:48.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:47 vm05 bash[20070]: audit 2026-03-09T14:20:46.872533+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:20:48.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:47 vm05 bash[20070]: audit 2026-03-09T14:20:46.873065+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:20:48.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:47 vm05 bash[20070]: audit 2026-03-09T14:20:46.873448+0000 mon.a (mon.0) 503 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:20:48.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:47 vm05 bash[20070]: cephadm 2026-03-09T14:20:46.873764+0000 mgr.x (mgr.14150) 176 : cephadm [INF] Adjusting osd_memory_target on vm04 to 1517M
2026-03-09T14:20:48.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:47 vm05 bash[20070]: audit 2026-03-09T14:20:46.875901+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:48.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:47 vm05 bash[20070]: audit 2026-03-09T14:20:46.877068+0000 mon.a (mon.0) 505 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:20:48.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:47 vm05 bash[20070]: audit 2026-03-09T14:20:46.877522+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:20:48.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:47 vm05 bash[20070]: audit 2026-03-09T14:20:46.880675+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:48.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:47 vm05 bash[20070]: cluster 2026-03-09T14:20:46.931523+0000 mgr.x (mgr.14150) 177 : cluster [DBG] pgmap v136: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:20:48.301 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:47 vm03 bash[17524]: cephadm 2026-03-09T14:20:46.863114+0000 mgr.x (mgr.14150) 175 : cephadm [INF] Detected new or changed devices on vm04
2026-03-09T14:20:48.301 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:47 vm03 bash[17524]: audit 2026-03-09T14:20:46.868412+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:48.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:47 vm03 bash[17524]: audit 2026-03-09T14:20:46.871758+0000 mon.a (mon.0) 500 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:48.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:47 vm03 bash[17524]: audit 2026-03-09T14:20:46.872533+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:20:48.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:47 vm03 bash[17524]: audit 2026-03-09T14:20:46.873065+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:20:48.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:47 vm03 bash[17524]: audit 2026-03-09T14:20:46.873448+0000 mon.a (mon.0) 503 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:20:48.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:47 vm03 bash[17524]: cephadm 2026-03-09T14:20:46.873764+0000 mgr.x (mgr.14150) 176 : cephadm [INF] Adjusting osd_memory_target on vm04 to 1517M
2026-03-09T14:20:48.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:47 vm03 bash[17524]: audit 2026-03-09T14:20:46.875901+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:48.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:47 vm03 bash[17524]: audit 2026-03-09T14:20:46.877068+0000 mon.a (mon.0) 505 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:20:48.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:47 vm03 bash[17524]: audit 2026-03-09T14:20:46.877522+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:20:48.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:47 vm03 bash[17524]: audit 2026-03-09T14:20:46.880675+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:20:48.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:47 vm03 bash[17524]: cluster 2026-03-09T14:20:46.931523+0000 mgr.x (mgr.14150) 177 : cluster [DBG] pgmap v136: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:20:49.380 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.c/config
2026-03-09T14:20:50.256 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:49 vm04 bash[19581]: cluster 2026-03-09T14:20:48.931822+0000 mgr.x (mgr.14150) 178 : cluster [DBG] pgmap v137: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:20:50.256 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:49 vm04 bash[19581]: audit 2026-03-09T14:20:49.625722+0000 mon.a (mon.0) 508 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T14:20:50.256 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:49 vm04 bash[19581]: audit 2026-03-09T14:20:49.627004+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
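The config rm osd_memory_target entries and the "Adjusting osd_memory_target on vm04 to 1517M" message above are cephadm's memory autotuner reacting to the device refresh: it clears stale per-OSD overrides, then divides the host's usable RAM among the daemons colocated there and re-applies a single per-host target. A sketch of how to inspect or opt out of this behaviour (option and mask syntax as in current Ceph docs; verify against your release):

    # See what the autotuner applied for a given OSD:
    ceph config get osd.4 osd_memory_target
    # Opt one host out, or disable autotuning for all OSDs:
    ceph config set osd/host:vm04 osd_memory_target_autotune false
    ceph config set osd osd_memory_target_autotune false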
2026-03-09T14:20:50.256 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:49 vm04 bash[19581]: audit 2026-03-09T14:20:49.627399+0000 mon.a (mon.0) 510 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:20:50.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:49 vm05 bash[20070]: cluster 2026-03-09T14:20:48.931822+0000 mgr.x (mgr.14150) 178 : cluster [DBG] pgmap v137: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:20:50.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:49 vm05 bash[20070]: audit 2026-03-09T14:20:49.625722+0000 mon.a (mon.0) 508 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T14:20:50.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:49 vm05 bash[20070]: audit 2026-03-09T14:20:49.627004+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T14:20:50.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:49 vm05 bash[20070]: audit 2026-03-09T14:20:49.627399+0000 mon.a (mon.0) 510 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:20:50.301 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:49 vm03 bash[17524]: cluster 2026-03-09T14:20:48.931822+0000 mgr.x (mgr.14150) 178 : cluster [DBG] pgmap v137: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:20:50.301 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:49 vm03 bash[17524]: audit 2026-03-09T14:20:49.625722+0000 mon.a (mon.0) 508 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T14:20:50.301 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:49 vm03 bash[17524]: audit 2026-03-09T14:20:49.627004+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T14:20:50.301 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:49 vm03 bash[17524]: audit 2026-03-09T14:20:49.627399+0000 mon.a (mon.0) 510 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:20:51.256 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:50 vm04 bash[19581]: audit 2026-03-09T14:20:49.624168+0000 mgr.x (mgr.14150) 179 : audit [DBG] from='client.14328 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:20:51.256 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:50 vm05 bash[20070]: audit 2026-03-09T14:20:49.624168+0000 mgr.x (mgr.14150) 179 : audit [DBG] from='client.14328 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:20:51.301 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:50 vm03 bash[17524]: audit 2026-03-09T14:20:49.624168+0000 mgr.x (mgr.14150) 179 : audit [DBG] from='client.14328 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
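Entry 179 above is the orch daemon add osd call arriving at the mgr; entries 508-510 before it are the mgr collecting what the target host needs (an osd tree check for destroyed ids, the client.bootstrap-osd key, a minimal conf). With those in hand, ceph-volume on vm05 authenticates as client.bootstrap-osd and reserves an OSD id, which is the osd new exchange that follows below. The reservation can be reproduced by hand; a sketch, assuming the bootstrap keyring at its default path:

    # Reserve the next free OSD id ahead of a fresh ceph-volume prepare:
    uuid=$(uuidgen)
    ceph --name client.bootstrap-osd \
         --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
         osd new "$uuid"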
2026-03-09T14:20:52.256 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:51 vm04 bash[19581]: cluster 2026-03-09T14:20:50.932108+0000 mgr.x (mgr.14150) 180 : cluster [DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:20:52.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:51 vm05 bash[20070]: cluster 2026-03-09T14:20:50.932108+0000 mgr.x (mgr.14150) 180 : cluster [DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:20:52.301 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:51 vm03 bash[17524]: cluster 2026-03-09T14:20:50.932108+0000 mgr.x (mgr.14150) 180 : cluster [DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:20:54.256 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:54 vm04 bash[19581]: cluster 2026-03-09T14:20:52.932361+0000 mgr.x (mgr.14150) 181 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:20:54.256 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:54 vm04 bash[19581]: audit 2026-03-09T14:20:53.939580+0000 mon.c (mon.1) 10 : audit [INF] from='client.? 192.168.123.105:0/1883690095' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "628905a2-37b8-4495-89ad-022957204832"}]: dispatch
2026-03-09T14:20:54.256 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:54 vm04 bash[19581]: audit 2026-03-09T14:20:53.940371+0000 mon.a (mon.0) 511 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "628905a2-37b8-4495-89ad-022957204832"}]: dispatch
2026-03-09T14:20:54.257 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:54 vm04 bash[19581]: audit 2026-03-09T14:20:53.943194+0000 mon.a (mon.0) 512 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "628905a2-37b8-4495-89ad-022957204832"}]': finished
2026-03-09T14:20:54.257 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:54 vm04 bash[19581]: cluster 2026-03-09T14:20:53.945983+0000 mon.a (mon.0) 513 : cluster [DBG] osdmap e33: 6 total, 5 up, 6 in
2026-03-09T14:20:54.257 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:54 vm04 bash[19581]: audit 2026-03-09T14:20:53.946166+0000 mon.a (mon.0) 514 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T14:20:54.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:54 vm05 bash[20070]: cluster 2026-03-09T14:20:52.932361+0000 mgr.x (mgr.14150) 181 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:20:54.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:54 vm05 bash[20070]: audit 2026-03-09T14:20:53.939580+0000 mon.c (mon.1) 10 : audit [INF] from='client.? 192.168.123.105:0/1883690095' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "628905a2-37b8-4495-89ad-022957204832"}]: dispatch
2026-03-09T14:20:54.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:54 vm05 bash[20070]: audit 2026-03-09T14:20:53.940371+0000 mon.a (mon.0) 511 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "628905a2-37b8-4495-89ad-022957204832"}]: dispatch
2026-03-09T14:20:54.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:54 vm05 bash[20070]: audit 2026-03-09T14:20:53.943194+0000 mon.a (mon.0) 512 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "628905a2-37b8-4495-89ad-022957204832"}]': finished
2026-03-09T14:20:54.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:54 vm05 bash[20070]: cluster 2026-03-09T14:20:53.945983+0000 mon.a (mon.0) 513 : cluster [DBG] osdmap e33: 6 total, 5 up, 6 in
2026-03-09T14:20:54.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:54 vm05 bash[20070]: audit 2026-03-09T14:20:53.946166+0000 mon.a (mon.0) 514 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T14:20:54.301 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:54 vm03 bash[17524]: cluster 2026-03-09T14:20:52.932361+0000 mgr.x (mgr.14150) 181 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:20:54.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:54 vm03 bash[17524]: audit 2026-03-09T14:20:53.939580+0000 mon.c (mon.1) 10 : audit [INF] from='client.? 192.168.123.105:0/1883690095' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "628905a2-37b8-4495-89ad-022957204832"}]: dispatch
2026-03-09T14:20:54.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:54 vm03 bash[17524]: audit 2026-03-09T14:20:53.940371+0000 mon.a (mon.0) 511 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "628905a2-37b8-4495-89ad-022957204832"}]: dispatch
2026-03-09T14:20:54.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:54 vm03 bash[17524]: audit 2026-03-09T14:20:53.943194+0000 mon.a (mon.0) 512 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "628905a2-37b8-4495-89ad-022957204832"}]': finished
2026-03-09T14:20:54.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:54 vm03 bash[17524]: cluster 2026-03-09T14:20:53.945983+0000 mon.a (mon.0) 513 : cluster [DBG] osdmap e33: 6 total, 5 up, 6 in
2026-03-09T14:20:54.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:54 vm03 bash[17524]: audit 2026-03-09T14:20:53.946166+0000 mon.a (mon.0) 514 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T14:20:55.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:55 vm05 bash[20070]: audit 2026-03-09T14:20:54.506495+0000 mon.c (mon.1) 11 : audit [DBG] from='client.? 192.168.123.105:0/3809130714' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T14:20:55.301 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:55 vm03 bash[17524]: audit 2026-03-09T14:20:54.506495+0000 mon.c (mon.1) 11 : audit [DBG] from='client.? 192.168.123.105:0/3809130714' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T14:20:55.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:55 vm04 bash[19581]: audit 2026-03-09T14:20:54.506495+0000 mon.c (mon.1) 11 : audit [DBG] from='client.? 192.168.123.105:0/3809130714' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T14:20:56.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:56 vm03 bash[17524]: cluster 2026-03-09T14:20:54.932650+0000 mgr.x (mgr.14150) 182 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:20:56.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:56 vm04 bash[19581]: cluster 2026-03-09T14:20:54.932650+0000 mgr.x (mgr.14150) 182 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:20:56.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:56 vm05 bash[20070]: cluster 2026-03-09T14:20:54.932650+0000 mgr.x (mgr.14150) 182 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:20:58.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:20:58 vm03 bash[17524]: cluster 2026-03-09T14:20:56.932909+0000 mgr.x (mgr.14150) 183 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:20:58.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:20:58 vm04 bash[19581]: cluster 2026-03-09T14:20:56.932909+0000 mgr.x (mgr.14150) 183 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:20:58.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:20:58 vm05 bash[20070]: cluster 2026-03-09T14:20:56.932909+0000 mgr.x (mgr.14150) 183 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:21:00.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:00 vm03 bash[17524]: cluster 2026-03-09T14:20:58.933116+0000 mgr.x (mgr.14150) 184 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:21:00.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:00 vm04 bash[19581]: cluster 2026-03-09T14:20:58.933116+0000 mgr.x (mgr.14150) 184 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:21:00.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:00 vm05 bash[20070]: cluster 2026-03-09T14:20:58.933116+0000 mgr.x (mgr.14150) 184 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:21:02.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:02 vm03 bash[17524]: cluster 2026-03-09T14:21:00.933357+0000 mgr.x (mgr.14150) 185 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:21:02.503 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:02 vm05 bash[20070]: cluster 2026-03-09T14:21:00.933357+0000 mgr.x (mgr.14150) 185 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:21:02.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:02 vm04 bash[19581]: cluster 2026-03-09T14:21:00.933357+0000 mgr.x (mgr.14150) 185 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:21:03.153 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:03 vm05 bash[20070]: audit 2026-03-09T14:21:02.914867+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-09T14:21:03.153 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:03 vm05 bash[20070]: audit 2026-03-09T14:21:02.915418+0000 mon.a (mon.0) 516 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:21:03.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:03 vm03 bash[17524]: audit 2026-03-09T14:21:02.914867+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-09T14:21:03.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:03 vm03 bash[17524]: audit 2026-03-09T14:21:02.915418+0000 mon.a (mon.0) 516 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:21:03.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:03 vm04 bash[19581]: audit 2026-03-09T14:21:02.914867+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-09T14:21:03.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:03 vm04 bash[19581]: audit 2026-03-09T14:21:02.915418+0000 mon.a (mon.0) 516 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:21:03.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:03 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:21:04.121 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:04 vm05 bash[20070]: cephadm 2026-03-09T14:21:02.915904+0000 mgr.x (mgr.14150) 186 : cephadm [INF] Deploying daemon osd.5 on vm05
2026-03-09T14:21:04.121 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:04 vm05 bash[20070]: cluster 2026-03-09T14:21:02.933568+0000 mgr.x (mgr.14150) 187 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:21:04.121 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:04 vm05 bash[20070]: audit 2026-03-09T14:21:03.885962+0000 mon.a (mon.0) 517 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:21:04.121 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:04 vm05 bash[20070]: audit 2026-03-09T14:21:03.890808+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:21:04.121 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:04 vm05 bash[20070]: audit 2026-03-09T14:21:03.894748+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:21:04.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:04 vm03 bash[17524]: cephadm 2026-03-09T14:21:02.915904+0000 mgr.x (mgr.14150) 186 : cephadm [INF] Deploying daemon osd.5 on vm05
2026-03-09T14:21:04.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:04 vm03 bash[17524]: cluster 2026-03-09T14:21:02.933568+0000 mgr.x (mgr.14150) 187 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:21:04.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:04 vm03 bash[17524]: audit 2026-03-09T14:21:03.885962+0000 mon.a (mon.0) 517 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:21:04.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:04 vm03 bash[17524]: audit 2026-03-09T14:21:03.890808+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:21:04.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:04 vm03 bash[17524]: audit 2026-03-09T14:21:03.894748+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:21:04.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:04 vm04 bash[19581]: cephadm 2026-03-09T14:21:02.915904+0000 mgr.x (mgr.14150) 186 : cephadm [INF] Deploying daemon osd.5 on vm05
2026-03-09T14:21:04.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:04 vm04 bash[19581]: cluster 2026-03-09T14:21:02.933568+0000 mgr.x (mgr.14150) 187 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:21:04.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:04 vm04 bash[19581]: audit 2026-03-09T14:21:03.885962+0000 mon.a (mon.0) 517 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:21:04.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:04 vm04 bash[19581]: audit 2026-03-09T14:21:03.890808+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:21:04.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:04 vm04 bash[19581]: audit 2026-03-09T14:21:03.894748+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:21:06.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:06 vm03 bash[17524]: cluster 2026-03-09T14:21:04.933775+0000 mgr.x (mgr.14150) 188 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:21:06.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:06 vm04 bash[19581]: cluster 2026-03-09T14:21:04.933775+0000 mgr.x (mgr.14150) 188 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:21:06.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:06 vm05 bash[20070]: cluster 2026-03-09T14:21:04.933775+0000 mgr.x (mgr.14150) 188 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:21:08.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:08 vm03 bash[17524]: cluster 2026-03-09T14:21:06.934004+0000 mgr.x (mgr.14150) 189 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:21:08.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:08 vm03 bash[17524]: audit 2026-03-09T14:21:07.096862+0000 mon.a (mon.0) 520 : audit [INF] from='osd.5 [v2:192.168.123.105:6800/67591369,v1:192.168.123.105:6801/67591369]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-09T14:21:08.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:08 vm04 bash[19581]: cluster 2026-03-09T14:21:06.934004+0000 mgr.x (mgr.14150) 189 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:21:08.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:08 vm04 bash[19581]: audit 2026-03-09T14:21:07.096862+0000 mon.a (mon.0) 520 : audit [INF] from='osd.5 [v2:192.168.123.105:6800/67591369,v1:192.168.123.105:6801/67591369]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-09T14:21:08.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:08 vm05 bash[20070]: cluster 2026-03-09T14:21:06.934004+0000 mgr.x (mgr.14150) 189 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:21:08.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:08 vm05 bash[20070]: audit 2026-03-09T14:21:07.096862+0000 mon.a (mon.0) 520 : audit [INF] from='osd.5 [v2:192.168.123.105:6800/67591369,v1:192.168.123.105:6801/67591369]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-09T14:21:09.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:09 vm04 bash[19581]: audit 2026-03-09T14:21:08.055919+0000 mon.a (mon.0) 521 : audit [INF] from='osd.5 [v2:192.168.123.105:6800/67591369,v1:192.168.123.105:6801/67591369]' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished
2026-03-09T14:21:09.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:09 vm04 bash[19581]: cluster 2026-03-09T14:21:08.057482+0000 mon.a (mon.0) 522 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in
2026-03-09T14:21:09.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:09 vm04 bash[19581]: audit 2026-03-09T14:21:08.057646+0000 mon.a (mon.0) 523 : audit [INF] from='osd.5 [v2:192.168.123.105:6800/67591369,v1:192.168.123.105:6801/67591369]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-09T14:21:09.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:09 vm04 bash[19581]: audit 2026-03-09T14:21:08.057732+0000 mon.a (mon.0) 524 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T14:21:09.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:09 vm05 bash[20070]: audit 2026-03-09T14:21:08.055919+0000 mon.a (mon.0) 521 : audit [INF] from='osd.5 [v2:192.168.123.105:6800/67591369,v1:192.168.123.105:6801/67591369]' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished
2026-03-09T14:21:09.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:09 vm05 bash[20070]: cluster 2026-03-09T14:21:08.057482+0000 mon.a (mon.0) 522 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in
2026-03-09T14:21:09.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:09 vm05 bash[20070]: audit 2026-03-09T14:21:08.057646+0000 mon.a (mon.0) 523 : audit [INF] from='osd.5 [v2:192.168.123.105:6800/67591369,v1:192.168.123.105:6801/67591369]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-09T14:21:09.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:09 vm05 bash[20070]: audit 2026-03-09T14:21:08.057732+0000 mon.a (mon.0) 524 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T14:21:09.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:09 vm03 bash[17524]: audit 2026-03-09T14:21:08.055919+0000 mon.a (mon.0) 521 : audit [INF] from='osd.5 [v2:192.168.123.105:6800/67591369,v1:192.168.123.105:6801/67591369]' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished
2026-03-09T14:21:09.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:09 vm03 bash[17524]: cluster 2026-03-09T14:21:08.057482+0000 mon.a (mon.0) 522 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in
2026-03-09T14:21:09.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:09 vm03 bash[17524]: audit 2026-03-09T14:21:08.057646+0000 mon.a (mon.0) 523 : audit [INF] from='osd.5 [v2:192.168.123.105:6800/67591369,v1:192.168.123.105:6801/67591369]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-09T14:21:09.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:09 vm03 bash[17524]: audit 2026-03-09T14:21:08.057732+0000 mon.a (mon.0) 524 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T14:21:10.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:10 vm04 bash[19581]: cluster 2026-03-09T14:21:08.934241+0000 mgr.x (mgr.14150) 190 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:21:10.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:10 vm04 bash[19581]: audit 2026-03-09T14:21:09.059001+0000 mon.a (mon.0) 525 : audit [INF] from='osd.5 [v2:192.168.123.105:6800/67591369,v1:192.168.123.105:6801/67591369]' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished
2026-03-09T14:21:10.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:10 vm04 bash[19581]: cluster 2026-03-09T14:21:09.060554+0000 mon.a (mon.0) 526 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in
2026-03-09T14:21:10.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:10 vm04 bash[19581]: audit 2026-03-09T14:21:09.061558+0000 mon.a (mon.0) 527 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T14:21:10.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:10 vm04 bash[19581]: audit 2026-03-09T14:21:09.064072+0000 mon.a (mon.0) 528 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T14:21:10.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:10 vm04 bash[19581]: cluster 2026-03-09T14:21:09.256055+0000 mon.a (mon.0) 529 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in
2026-03-09T14:21:10.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:10 vm04 bash[19581]: audit 2026-03-09T14:21:09.256234+0000 mon.a (mon.0) 530 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T14:21:10.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:10 vm04 bash[19581]: audit 2026-03-09T14:21:09.888747+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:21:10.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:10 vm04 bash[19581]: audit 2026-03-09T14:21:09.895126+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:21:10.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:10 vm04 bash[19581]: audit 2026-03-09T14:21:09.896646+0000 mon.a (mon.0) 533 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:21:10.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:10 vm04 bash[19581]: audit 2026-03-09T14:21:09.897208+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:21:10.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:10 vm04 bash[19581]: audit 2026-03-09T14:21:09.905193+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:21:10.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:10 vm04 bash[19581]: audit 2026-03-09T14:21:09.958245+0000 mon.a (mon.0) 536 : audit [INF] from='osd.5 [v2:192.168.123.105:6800/67591369,v1:192.168.123.105:6801/67591369]' entity='osd.5'
2026-03-09T14:21:10.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:10 vm04 bash[19581]: audit 2026-03-09T14:21:10.063716+0000 mon.a (mon.0) 537 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T14:21:10.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:10 vm05 bash[20070]: cluster 2026-03-09T14:21:08.934241+0000 mgr.x (mgr.14150) 190 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:21:10.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:10 vm05 bash[20070]: audit 2026-03-09T14:21:09.059001+0000 mon.a (mon.0) 525 : audit [INF] from='osd.5 [v2:192.168.123.105:6800/67591369,v1:192.168.123.105:6801/67591369]' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished
2026-03-09T14:21:10.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:10 vm05 bash[20070]: cluster 2026-03-09T14:21:09.060554+0000 mon.a (mon.0) 526 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in
2026-03-09T14:21:10.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:10 vm05 bash[20070]: audit 2026-03-09T14:21:09.061558+0000 mon.a (mon.0) 527 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T14:21:10.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:10 vm05 bash[20070]: audit 2026-03-09T14:21:09.064072+0000 mon.a (mon.0) 528 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T14:21:10.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:10 vm05 bash[20070]: cluster 2026-03-09T14:21:09.256055+0000 mon.a (mon.0) 529 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in
2026-03-09T14:21:10.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:10 vm05 bash[20070]: audit 2026-03-09T14:21:09.256234+0000 mon.a (mon.0) 530 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T14:21:10.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:10 vm05 bash[20070]: audit 2026-03-09T14:21:09.888747+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:21:10.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:10 vm05 bash[20070]: audit 2026-03-09T14:21:09.895126+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:21:10.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:10 vm05 bash[20070]: audit 2026-03-09T14:21:09.896646+0000 mon.a (mon.0) 533 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:21:10.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:10 vm05 bash[20070]: audit 2026-03-09T14:21:09.897208+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:21:10.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:10 vm05 bash[20070]: audit 2026-03-09T14:21:09.905193+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:21:10.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:10 vm05 bash[20070]: audit 2026-03-09T14:21:09.958245+0000 mon.a (mon.0) 536 : audit [INF] from='osd.5 [v2:192.168.123.105:6800/67591369,v1:192.168.123.105:6801/67591369]' entity='osd.5'
2026-03-09T14:21:10.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:10 vm05 bash[20070]: audit 2026-03-09T14:21:10.063716+0000 mon.a (mon.0) 537 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T14:21:10.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:10 vm03 bash[17524]: cluster 2026-03-09T14:21:08.934241+0000 mgr.x (mgr.14150) 190 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:21:10.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:10 vm03 bash[17524]: audit 2026-03-09T14:21:09.059001+0000 mon.a (mon.0) 525 : audit [INF] from='osd.5 [v2:192.168.123.105:6800/67591369,v1:192.168.123.105:6801/67591369]' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished
2026-03-09T14:21:10.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:10 vm03 bash[17524]: cluster 2026-03-09T14:21:09.060554+0000 mon.a (mon.0) 526 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in
2026-03-09T14:21:10.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:10 vm03 bash[17524]: audit 2026-03-09T14:21:09.061558+0000 mon.a (mon.0) 527 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T14:21:10.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:10 vm03 bash[17524]: audit 2026-03-09T14:21:09.064072+0000 mon.a (mon.0) 528 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T14:21:10.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:10 vm03 bash[17524]: cluster 2026-03-09T14:21:09.256055+0000 mon.a (mon.0) 529 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in
2026-03-09T14:21:10.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:10 vm03 bash[17524]: audit 2026-03-09T14:21:09.256234+0000 mon.a (mon.0) 530 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T14:21:10.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:10 vm03 bash[17524]: audit 2026-03-09T14:21:09.888747+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:21:10.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:10 vm03 bash[17524]: audit 2026-03-09T14:21:09.895126+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:10.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:10 vm03 bash[17524]: audit 2026-03-09T14:21:09.895126+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:10.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:10 vm03 bash[17524]: audit 2026-03-09T14:21:09.896646+0000 mon.a (mon.0) 533 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:21:10.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:10 vm03 bash[17524]: audit 2026-03-09T14:21:09.896646+0000 mon.a (mon.0) 533 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:21:10.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:10 vm03 bash[17524]: audit 2026-03-09T14:21:09.897208+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:21:10.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:10 vm03 bash[17524]: audit 2026-03-09T14:21:09.897208+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:21:10.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:10 vm03 bash[17524]: audit 2026-03-09T14:21:09.905193+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:10.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:10 vm03 bash[17524]: audit 2026-03-09T14:21:09.905193+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:10.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:10 vm03 bash[17524]: audit 2026-03-09T14:21:09.958245+0000 mon.a (mon.0) 536 : audit [INF] from='osd.5 [v2:192.168.123.105:6800/67591369,v1:192.168.123.105:6801/67591369]' entity='osd.5' 2026-03-09T14:21:10.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:10 vm03 bash[17524]: audit 2026-03-09T14:21:09.958245+0000 mon.a (mon.0) 536 : audit [INF] from='osd.5 [v2:192.168.123.105:6800/67591369,v1:192.168.123.105:6801/67591369]' entity='osd.5' 2026-03-09T14:21:10.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:10 vm03 bash[17524]: audit 2026-03-09T14:21:10.063716+0000 mon.a (mon.0) 537 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:21:10.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:10 vm03 bash[17524]: audit 2026-03-09T14:21:10.063716+0000 mon.a (mon.0) 537 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:21:10.908 INFO:teuthology.orchestra.run.vm05.stdout:Created osd(s) 5 on host 'vm05' 2026-03-09T14:21:10.972 DEBUG:teuthology.orchestra.run.vm05:osd.5> sudo journalctl -f -n 0 -u ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.5.service 2026-03-09T14:21:10.973 INFO:tasks.cephadm:Deploying osd.6 on vm05 with /dev/vdd... 
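The "osd crush create-or-move" audit entry above is the newly created osd.5 registering itself in the CRUSH hierarchy as it starts. The same placement can be issued by hand with the ceph CLI; a minimal sketch, assuming client.admin access on the node (the id, weight, and bucket args mirror the audit payload above):

    # place osd.5 under host vm05 in the default root, using the weight from the log
    sudo ceph osd crush create-or-move 5 0.0195 host=vm05 root=default
    # confirm the placement and up/in state
    sudo ceph osd tree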
2026-03-09T14:21:10.973 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- lvm zap /dev/vdd
2026-03-09T14:21:11.259 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:11 vm05 bash[20070]: cluster 2026-03-09T14:21:08.079835+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T14:21:11.259 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:11 vm05 bash[20070]: cluster 2026-03-09T14:21:08.079873+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T14:21:11.259 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:11 vm05 bash[20070]: cluster 2026-03-09T14:21:10.258402+0000 mon.a (mon.0) 538 : cluster [INF] osd.5 [v2:192.168.123.105:6800/67591369,v1:192.168.123.105:6801/67591369] boot
2026-03-09T14:21:11.259 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:11 vm05 bash[20070]: cluster 2026-03-09T14:21:10.258539+0000 mon.a (mon.0) 539 : cluster [DBG] osdmap e37: 6 total, 6 up, 6 in
2026-03-09T14:21:11.259 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:11 vm05 bash[20070]: audit 2026-03-09T14:21:10.258772+0000 mon.a (mon.0) 540 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T14:21:11.259 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:11 vm05 bash[20070]: audit 2026-03-09T14:21:10.897976+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:21:11.259 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:11 vm05 bash[20070]: audit 2026-03-09T14:21:10.903012+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:21:11.259 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:11 vm05 bash[20070]: audit 2026-03-09T14:21:10.906150+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:21:11.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:11 vm04 bash[19581]: cluster 2026-03-09T14:21:08.079835+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T14:21:11.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:11 vm04 bash[19581]: cluster 2026-03-09T14:21:08.079873+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T14:21:11.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:11 vm04 bash[19581]: cluster 2026-03-09T14:21:10.258402+0000 mon.a (mon.0) 538 : cluster [INF] osd.5 [v2:192.168.123.105:6800/67591369,v1:192.168.123.105:6801/67591369] boot
2026-03-09T14:21:11.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:11 vm04 bash[19581]: cluster 2026-03-09T14:21:10.258539+0000 mon.a (mon.0) 539 : cluster [DBG] osdmap e37: 6 total, 6 up, 6 in
2026-03-09T14:21:11.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:11 vm04 bash[19581]: audit 2026-03-09T14:21:10.258772+0000 mon.a (mon.0) 540 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T14:21:11.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:11 vm04 bash[19581]: audit 2026-03-09T14:21:10.897976+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:21:11.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:11 vm04 bash[19581]: audit 2026-03-09T14:21:10.903012+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:21:11.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:11 vm04 bash[19581]: audit 2026-03-09T14:21:10.906150+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:21:11.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:11 vm03 bash[17524]: cluster 2026-03-09T14:21:08.079835+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T14:21:11.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:11 vm03 bash[17524]: cluster 2026-03-09T14:21:08.079873+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T14:21:11.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:11 vm03 bash[17524]: cluster 2026-03-09T14:21:10.258402+0000 mon.a (mon.0) 538 : cluster [INF] osd.5 [v2:192.168.123.105:6800/67591369,v1:192.168.123.105:6801/67591369] boot
2026-03-09T14:21:11.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:11 vm03 bash[17524]: cluster 2026-03-09T14:21:10.258539+0000 mon.a (mon.0) 539 : cluster [DBG] osdmap e37: 6 total, 6 up, 6 in
2026-03-09T14:21:11.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:11 vm03 bash[17524]: audit 2026-03-09T14:21:10.258772+0000 mon.a (mon.0) 540 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-09T14:21:11.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:11 vm03 bash[17524]: audit 2026-03-09T14:21:10.897976+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:21:11.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:11 vm03 bash[17524]: audit 2026-03-09T14:21:10.903012+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:21:11.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:11 vm03 bash[17524]: audit 2026-03-09T14:21:10.906150+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:21:12.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:12 vm03 bash[17524]: cluster 2026-03-09T14:21:10.934427+0000 mgr.x (mgr.14150) 191 : cluster [DBG] pgmap v153: 1 pgs: 1 peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:21:12.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:12 vm03 bash[17524]: cluster 2026-03-09T14:21:11.259374+0000 mon.a (mon.0) 544 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in
2026-03-09T14:21:12.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:12 vm04 bash[19581]: cluster 2026-03-09T14:21:10.934427+0000 mgr.x (mgr.14150) 191 : cluster [DBG] pgmap v153: 1 pgs: 1 peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:21:12.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:12 vm04 bash[19581]: cluster 2026-03-09T14:21:11.259374+0000 mon.a (mon.0) 544 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in
2026-03-09T14:21:12.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:12 vm05 bash[20070]: cluster 2026-03-09T14:21:10.934427+0000 mgr.x (mgr.14150) 191 : cluster [DBG] pgmap v153: 1 pgs: 1 peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-09T14:21:12.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:12 vm05 bash[20070]: cluster 2026-03-09T14:21:11.259374+0000 mon.a (mon.0) 544 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in
2026-03-09T14:21:13.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:13 vm03 bash[17524]: cluster 2026-03-09T14:21:12.267041+0000 mon.a (mon.0) 545 : cluster [DBG] osdmap e39: 6 total, 6 up, 6 in
2026-03-09T14:21:13.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:13 vm04 bash[19581]: cluster 2026-03-09T14:21:12.267041+0000 mon.a (mon.0) 545 : cluster [DBG] osdmap e39: 6 total, 6 up, 6 in
2026-03-09T14:21:13.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:13 vm05 bash[20070]: cluster 2026-03-09T14:21:12.267041+0000 mon.a (mon.0) 545 : cluster [DBG] osdmap e39: 6 total, 6 up, 6 in
2026-03-09T14:21:14.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:14 vm03 bash[17524]: cluster 2026-03-09T14:21:12.934717+0000 mgr.x (mgr.14150) 192 : cluster [DBG] pgmap v156: 1 pgs: 1 peering; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-09T14:21:14.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:14 vm04 bash[19581]: cluster 2026-03-09T14:21:12.934717+0000 mgr.x (mgr.14150) 192 : cluster [DBG] pgmap v156: 1 pgs: 1 peering; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-09T14:21:14.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:14 vm05 bash[20070]: cluster 2026-03-09T14:21:12.934717+0000 mgr.x (mgr.14150) 192 : cluster [DBG] pgmap v156: 1 pgs: 1 peering; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-09T14:21:15.623 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.c/config
2026-03-09T14:21:16.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:16 vm03 bash[17524]: cluster 2026-03-09T14:21:14.935022+0000 mgr.x (mgr.14150) 193 : cluster [DBG] pgmap v157: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail; 79 KiB/s, 0 objects/s recovering
2026-03-09T14:21:16.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:16 vm04 bash[19581]: cluster 2026-03-09T14:21:14.935022+0000 mgr.x (mgr.14150) 193 : cluster [DBG] pgmap v157: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail; 79 KiB/s, 0 objects/s recovering
2026-03-09T14:21:16.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:16 vm05 bash[20070]: cluster 2026-03-09T14:21:14.935022+0000 mgr.x (mgr.14150) 193 : cluster [DBG] pgmap v157: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail; 79 KiB/s, 0 objects/s recovering
2026-03-09T14:21:17.091 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T14:21:17.104 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph orch daemon add osd vm05:/dev/vdd
2026-03-09T14:21:17.433 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:17 vm05 bash[20070]: cephadm 2026-03-09T14:21:16.425517+0000 mgr.x (mgr.14150) 194 : cephadm [INF] Detected new or changed devices on vm05
2026-03-09T14:21:17.433 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:17 vm05 bash[20070]: audit 2026-03-09T14:21:16.430385+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:21:17.433 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:17 vm05 bash[20070]: audit 2026-03-09T14:21:16.433919+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:21:17.433 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:17 vm05 bash[20070]: audit 2026-03-09T14:21:16.434783+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
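The two DEBUG lines above are the task's actual deployment flow for osd.6: wipe the device with ceph-volume, then hand it to the orchestrator. Outside teuthology the same two steps look like this; a minimal sketch, assuming a bootstrapped cephadm cluster and that /dev/vdd on vm05 is expendable (both commands are destructive to that device):

    # 1. clear any previous LVM metadata and partitions from the device
    sudo cephadm ceph-volume -- lvm zap /dev/vdd
    # 2. create a new OSD on the clean device via the orchestrator
    sudo cephadm shell -- ceph orch daemon add osd vm05:/dev/vdd

The test pins --image, --fsid, and the keyring path explicitly; on a normally bootstrapped host, plain cephadm shell infers them from /etc/ceph, as the "Inferring config" stderr lines in this log illustrate.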
"name": "osd_memory_target"}]: dispatch 2026-03-09T14:21:17.433 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:17 vm05 bash[20070]: cephadm 2026-03-09T14:21:16.435146+0000 mgr.x (mgr.14150) 195 : cephadm [INF] Adjusting osd_memory_target on vm05 to 4551M 2026-03-09T14:21:17.433 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:17 vm05 bash[20070]: cephadm 2026-03-09T14:21:16.435146+0000 mgr.x (mgr.14150) 195 : cephadm [INF] Adjusting osd_memory_target on vm05 to 4551M 2026-03-09T14:21:17.433 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:17 vm05 bash[20070]: audit 2026-03-09T14:21:16.437734+0000 mon.a (mon.0) 549 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:17.433 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:17 vm05 bash[20070]: audit 2026-03-09T14:21:16.437734+0000 mon.a (mon.0) 549 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:17.433 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:17 vm05 bash[20070]: audit 2026-03-09T14:21:16.439201+0000 mon.a (mon.0) 550 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:21:17.433 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:17 vm05 bash[20070]: audit 2026-03-09T14:21:16.439201+0000 mon.a (mon.0) 550 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:21:17.433 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:17 vm05 bash[20070]: audit 2026-03-09T14:21:16.439624+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:21:17.433 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:17 vm05 bash[20070]: audit 2026-03-09T14:21:16.439624+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:21:17.433 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:17 vm05 bash[20070]: audit 2026-03-09T14:21:16.443984+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:17.433 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:17 vm05 bash[20070]: audit 2026-03-09T14:21:16.443984+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:17.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:17 vm04 bash[19581]: cephadm 2026-03-09T14:21:16.425517+0000 mgr.x (mgr.14150) 194 : cephadm [INF] Detected new or changed devices on vm05 2026-03-09T14:21:17.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:17 vm04 bash[19581]: cephadm 2026-03-09T14:21:16.425517+0000 mgr.x (mgr.14150) 194 : cephadm [INF] Detected new or changed devices on vm05 2026-03-09T14:21:17.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:17 vm04 bash[19581]: audit 2026-03-09T14:21:16.430385+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:17.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:17 vm04 bash[19581]: audit 2026-03-09T14:21:16.430385+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:17.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:17 vm04 bash[19581]: audit 
2026-03-09T14:21:16.433919+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:17.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:17 vm04 bash[19581]: audit 2026-03-09T14:21:16.433919+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:17.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:17 vm04 bash[19581]: audit 2026-03-09T14:21:16.434783+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:21:17.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:17 vm04 bash[19581]: audit 2026-03-09T14:21:16.434783+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:21:17.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:17 vm04 bash[19581]: cephadm 2026-03-09T14:21:16.435146+0000 mgr.x (mgr.14150) 195 : cephadm [INF] Adjusting osd_memory_target on vm05 to 4551M 2026-03-09T14:21:17.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:17 vm04 bash[19581]: cephadm 2026-03-09T14:21:16.435146+0000 mgr.x (mgr.14150) 195 : cephadm [INF] Adjusting osd_memory_target on vm05 to 4551M 2026-03-09T14:21:17.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:17 vm04 bash[19581]: audit 2026-03-09T14:21:16.437734+0000 mon.a (mon.0) 549 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:17.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:17 vm04 bash[19581]: audit 2026-03-09T14:21:16.437734+0000 mon.a (mon.0) 549 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:17.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:17 vm04 bash[19581]: audit 2026-03-09T14:21:16.439201+0000 mon.a (mon.0) 550 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:21:17.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:17 vm04 bash[19581]: audit 2026-03-09T14:21:16.439201+0000 mon.a (mon.0) 550 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:21:17.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:17 vm04 bash[19581]: audit 2026-03-09T14:21:16.439624+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:21:17.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:17 vm04 bash[19581]: audit 2026-03-09T14:21:16.439624+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:21:17.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:17 vm04 bash[19581]: audit 2026-03-09T14:21:16.443984+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:17.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:17 vm04 bash[19581]: audit 2026-03-09T14:21:16.443984+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:17.802 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:17 vm03 bash[17524]: cephadm 2026-03-09T14:21:16.425517+0000 mgr.x (mgr.14150) 194 : cephadm [INF] Detected new or changed devices on vm05 2026-03-09T14:21:17.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:17 vm03 bash[17524]: cephadm 2026-03-09T14:21:16.425517+0000 mgr.x (mgr.14150) 194 : cephadm [INF] Detected new or changed devices on vm05 2026-03-09T14:21:17.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:17 vm03 bash[17524]: audit 2026-03-09T14:21:16.430385+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:17.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:17 vm03 bash[17524]: audit 2026-03-09T14:21:16.430385+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:17.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:17 vm03 bash[17524]: audit 2026-03-09T14:21:16.433919+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:17.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:17 vm03 bash[17524]: audit 2026-03-09T14:21:16.433919+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:17.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:17 vm03 bash[17524]: audit 2026-03-09T14:21:16.434783+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:21:17.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:17 vm03 bash[17524]: audit 2026-03-09T14:21:16.434783+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:21:17.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:17 vm03 bash[17524]: cephadm 2026-03-09T14:21:16.435146+0000 mgr.x (mgr.14150) 195 : cephadm [INF] Adjusting osd_memory_target on vm05 to 4551M 2026-03-09T14:21:17.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:17 vm03 bash[17524]: cephadm 2026-03-09T14:21:16.435146+0000 mgr.x (mgr.14150) 195 : cephadm [INF] Adjusting osd_memory_target on vm05 to 4551M 2026-03-09T14:21:17.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:17 vm03 bash[17524]: audit 2026-03-09T14:21:16.437734+0000 mon.a (mon.0) 549 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:17.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:17 vm03 bash[17524]: audit 2026-03-09T14:21:16.437734+0000 mon.a (mon.0) 549 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:17.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:17 vm03 bash[17524]: audit 2026-03-09T14:21:16.439201+0000 mon.a (mon.0) 550 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:21:17.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:17 vm03 bash[17524]: audit 2026-03-09T14:21:16.439201+0000 mon.a (mon.0) 550 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:21:17.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:17 vm03 bash[17524]: audit 2026-03-09T14:21:16.439624+0000 mon.a 
(mon.0) 551 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:21:17.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:17 vm03 bash[17524]: audit 2026-03-09T14:21:16.439624+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:21:17.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:17 vm03 bash[17524]: audit 2026-03-09T14:21:16.443984+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:17.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:17 vm03 bash[17524]: audit 2026-03-09T14:21:16.443984+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:18.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:18 vm04 bash[19581]: cluster 2026-03-09T14:21:16.935298+0000 mgr.x (mgr.14150) 196 : cluster [DBG] pgmap v158: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 67 KiB/s, 0 objects/s recovering 2026-03-09T14:21:18.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:18 vm04 bash[19581]: cluster 2026-03-09T14:21:16.935298+0000 mgr.x (mgr.14150) 196 : cluster [DBG] pgmap v158: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 67 KiB/s, 0 objects/s recovering 2026-03-09T14:21:18.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:18 vm05 bash[20070]: cluster 2026-03-09T14:21:16.935298+0000 mgr.x (mgr.14150) 196 : cluster [DBG] pgmap v158: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 67 KiB/s, 0 objects/s recovering 2026-03-09T14:21:18.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:18 vm05 bash[20070]: cluster 2026-03-09T14:21:16.935298+0000 mgr.x (mgr.14150) 196 : cluster [DBG] pgmap v158: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 67 KiB/s, 0 objects/s recovering 2026-03-09T14:21:18.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:18 vm03 bash[17524]: cluster 2026-03-09T14:21:16.935298+0000 mgr.x (mgr.14150) 196 : cluster [DBG] pgmap v158: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 67 KiB/s, 0 objects/s recovering 2026-03-09T14:21:18.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:18 vm03 bash[17524]: cluster 2026-03-09T14:21:16.935298+0000 mgr.x (mgr.14150) 196 : cluster [DBG] pgmap v158: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 67 KiB/s, 0 objects/s recovering 2026-03-09T14:21:20.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:20 vm04 bash[19581]: cluster 2026-03-09T14:21:18.935594+0000 mgr.x (mgr.14150) 197 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-09T14:21:20.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:20 vm04 bash[19581]: cluster 2026-03-09T14:21:18.935594+0000 mgr.x (mgr.14150) 197 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-09T14:21:20.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:20 vm05 bash[20070]: cluster 2026-03-09T14:21:18.935594+0000 mgr.x (mgr.14150) 197 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 56 KiB/s, 0 objects/s 
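The "Adjusting osd_memory_target on vm05 to 4551M" entries are cephadm's per-host memory autotuning: it clears the per-daemon override (the "config rm" audit entry) and then sets a target derived from host RAM and daemon count. A minimal sketch for inspecting or overriding the result; the option name osd_memory_target_autotune is quoted from memory of the cephadm tuning knob, so treat it as an assumption:

    # read the effective per-daemon target (compare with the 4551M in the log)
    sudo ceph config get osd.5 osd_memory_target
    # assumed option name: disable autotuning if fixed targets are preferred
    sudo ceph config set osd osd_memory_target_autotune false
    sudo ceph config set osd.5 osd_memory_target 4G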
2026-03-09T14:21:20.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:20 vm03 bash[17524]: cluster 2026-03-09T14:21:18.935594+0000 mgr.x (mgr.14150) 197 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 56 KiB/s, 0 objects/s recovering
2026-03-09T14:21:21.704 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.c/config
2026-03-09T14:21:22.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:22 vm04 bash[19581]: cluster 2026-03-09T14:21:20.935843+0000 mgr.x (mgr.14150) 198 : cluster [DBG] pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 46 KiB/s, 0 objects/s recovering
2026-03-09T14:21:22.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:22 vm04 bash[19581]: audit 2026-03-09T14:21:21.949556+0000 mon.a (mon.0) 553 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T14:21:22.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:22 vm04 bash[19581]: audit 2026-03-09T14:21:21.950786+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T14:21:22.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:22 vm04 bash[19581]: audit 2026-03-09T14:21:21.951138+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:21:22.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:22 vm05 bash[20070]: cluster 2026-03-09T14:21:20.935843+0000 mgr.x (mgr.14150) 198 : cluster [DBG] pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 46 KiB/s, 0 objects/s recovering
2026-03-09T14:21:22.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:22 vm05 bash[20070]: audit 2026-03-09T14:21:21.949556+0000 mon.a (mon.0) 553 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T14:21:22.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:22 vm05 bash[20070]: audit 2026-03-09T14:21:21.950786+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T14:21:22.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:22 vm05 bash[20070]: audit 2026-03-09T14:21:21.951138+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:21:22.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:22 vm03 bash[17524]: cluster 2026-03-09T14:21:20.935843+0000 mgr.x (mgr.14150) 198 : cluster [DBG] pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 46 KiB/s, 0 objects/s recovering
2026-03-09T14:21:22.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:22 vm03 bash[17524]: audit 2026-03-09T14:21:21.949556+0000 mon.a (mon.0) 553 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T14:21:22.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:22 vm03 bash[17524]: audit 2026-03-09T14:21:21.950786+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T14:21:22.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:22 vm03 bash[17524]: audit 2026-03-09T14:21:21.951138+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:21:23.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:23 vm04 bash[19581]: audit 2026-03-09T14:21:21.948304+0000 mgr.x (mgr.14150) 199 : audit [DBG] from='client.24244 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:21:23.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:23 vm05 bash[20070]: audit 2026-03-09T14:21:21.948304+0000 mgr.x (mgr.14150) 199 : audit [DBG] from='client.24244 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:21:23.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:23 vm03 bash[17524]: audit 2026-03-09T14:21:21.948304+0000 mgr.x (mgr.14150) 199 : audit [DBG] from='client.24244 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:21:24.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:24 vm04 bash[19581]: cluster 2026-03-09T14:21:22.936099+0000 mgr.x (mgr.14150) 200 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 42 KiB/s, 0 objects/s recovering
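The audit trail above shows what ceph orch daemon add osd vm05:/dev/vdd does before touching the device: the mgr checks for destroyed OSD ids it can reuse (the "osd tree" query with states ["destroyed"]), fetches the bootstrap-osd key, and generates a minimal conf to ship to the host. The id-reuse check can be reproduced directly; a minimal sketch, assuming client.admin access:

    # list only CRUSH entries in the destroyed state, the set cephadm scans for reusable ids
    sudo ceph osd tree destroyed --format json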
2026-03-09T14:21:24.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:24 vm05 bash[20070]: cluster 2026-03-09T14:21:22.936099+0000 mgr.x (mgr.14150) 200 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 42 KiB/s, 0 objects/s recovering
2026-03-09T14:21:24.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:24 vm03 bash[17524]: cluster 2026-03-09T14:21:22.936099+0000 mgr.x (mgr.14150) 200 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 42 KiB/s, 0 objects/s recovering
2026-03-09T14:21:26.728 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:26 vm05 bash[20070]: cluster 2026-03-09T14:21:24.936344+0000 mgr.x (mgr.14150) 201 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering
2026-03-09T14:21:26.728 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:26 vm05 bash[20070]: audit 2026-03-09T14:21:26.273473+0000 mon.c (mon.1) 12 : audit [INF] from='client.? 192.168.123.105:0/2042020755' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bf677cce-a472-46ab-9a91-492f3b2e689b"}]: dispatch
2026-03-09T14:21:26.728 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:26 vm05 bash[20070]: audit 2026-03-09T14:21:26.274573+0000 mon.a (mon.0) 556 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bf677cce-a472-46ab-9a91-492f3b2e689b"}]: dispatch
2026-03-09T14:21:26.728 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:26 vm05 bash[20070]: audit 2026-03-09T14:21:26.277579+0000 mon.a (mon.0) 557 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "bf677cce-a472-46ab-9a91-492f3b2e689b"}]': finished
2026-03-09T14:21:26.728 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:26 vm05 bash[20070]: cluster 2026-03-09T14:21:26.280077+0000 mon.a (mon.0) 558 : cluster [DBG] osdmap e40: 7 total, 6 up, 7 in
2026-03-09T14:21:26.728 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:26 vm05 bash[20070]: audit 2026-03-09T14:21:26.280218+0000 mon.a (mon.0) 559 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T14:21:26.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:26 vm04 bash[19581]: cluster 2026-03-09T14:21:24.936344+0000 mgr.x (mgr.14150) 201 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering
2026-03-09T14:21:26.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:26 vm04 bash[19581]: audit 2026-03-09T14:21:26.273473+0000 mon.c (mon.1) 12 : audit [INF] from='client.? 192.168.123.105:0/2042020755' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bf677cce-a472-46ab-9a91-492f3b2e689b"}]: dispatch
2026-03-09T14:21:26.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:26 vm04 bash[19581]: audit 2026-03-09T14:21:26.274573+0000 mon.a (mon.0) 556 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bf677cce-a472-46ab-9a91-492f3b2e689b"}]: dispatch
2026-03-09T14:21:26.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:26 vm04 bash[19581]: audit 2026-03-09T14:21:26.277579+0000 mon.a (mon.0) 557 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "bf677cce-a472-46ab-9a91-492f3b2e689b"}]': finished
2026-03-09T14:21:26.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:26 vm04 bash[19581]: cluster 2026-03-09T14:21:26.280077+0000 mon.a (mon.0) 558 : cluster [DBG] osdmap e40: 7 total, 6 up, 7 in
2026-03-09T14:21:26.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:26 vm04 bash[19581]: audit 2026-03-09T14:21:26.280218+0000 mon.a (mon.0) 559 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T14:21:26.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:26 vm03 bash[17524]: cluster 2026-03-09T14:21:24.936344+0000 mgr.x (mgr.14150) 201 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering
2026-03-09T14:21:26.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:26 vm03 bash[17524]: audit 2026-03-09T14:21:26.273473+0000 mon.c (mon.1) 12 : audit [INF] from='client.? 192.168.123.105:0/2042020755' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bf677cce-a472-46ab-9a91-492f3b2e689b"}]: dispatch
2026-03-09T14:21:26.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:26 vm03 bash[17524]: audit 2026-03-09T14:21:26.274573+0000 mon.a (mon.0) 556 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bf677cce-a472-46ab-9a91-492f3b2e689b"}]: dispatch
2026-03-09T14:21:26.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:26 vm03 bash[17524]: audit 2026-03-09T14:21:26.277579+0000 mon.a (mon.0) 557 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "bf677cce-a472-46ab-9a91-492f3b2e689b"}]': finished
2026-03-09T14:21:26.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:26 vm03 bash[17524]: cluster 2026-03-09T14:21:26.280077+0000 mon.a (mon.0) 558 : cluster [DBG] osdmap e40: 7 total, 6 up, 7 in
2026-03-09T14:21:26.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:26 vm03 bash[17524]: audit 2026-03-09T14:21:26.280218+0000 mon.a (mon.0) 559 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T14:21:27.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:27 vm04 bash[19581]: audit 2026-03-09T14:21:26.856226+0000 mon.c (mon.1) 13 : audit [DBG] from='client.? 192.168.123.105:0/4051144067' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T14:21:27.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:27 vm05 bash[20070]: audit 2026-03-09T14:21:26.856226+0000 mon.c (mon.1) 13 : audit [DBG] from='client.? 192.168.123.105:0/4051144067' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T14:21:27.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:27 vm03 bash[17524]: audit 2026-03-09T14:21:26.856226+0000 mon.c (mon.1) 13 : audit [DBG] from='client.? 192.168.123.105:0/4051144067' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
192.168.123.105:0/4051144067' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:21:28.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:28 vm04 bash[19581]: cluster 2026-03-09T14:21:26.936574+0000 mgr.x (mgr.14150) 202 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:28.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:28 vm04 bash[19581]: cluster 2026-03-09T14:21:26.936574+0000 mgr.x (mgr.14150) 202 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:28.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:28 vm05 bash[20070]: cluster 2026-03-09T14:21:26.936574+0000 mgr.x (mgr.14150) 202 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:28.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:28 vm05 bash[20070]: cluster 2026-03-09T14:21:26.936574+0000 mgr.x (mgr.14150) 202 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:28.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:28 vm03 bash[17524]: cluster 2026-03-09T14:21:26.936574+0000 mgr.x (mgr.14150) 202 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:28.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:28 vm03 bash[17524]: cluster 2026-03-09T14:21:26.936574+0000 mgr.x (mgr.14150) 202 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:30.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:30 vm04 bash[19581]: cluster 2026-03-09T14:21:28.936861+0000 mgr.x (mgr.14150) 203 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:30.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:30 vm04 bash[19581]: cluster 2026-03-09T14:21:28.936861+0000 mgr.x (mgr.14150) 203 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:30.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:30 vm05 bash[20070]: cluster 2026-03-09T14:21:28.936861+0000 mgr.x (mgr.14150) 203 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:30.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:30 vm05 bash[20070]: cluster 2026-03-09T14:21:28.936861+0000 mgr.x (mgr.14150) 203 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:30.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:30 vm03 bash[17524]: cluster 2026-03-09T14:21:28.936861+0000 mgr.x (mgr.14150) 203 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:30.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:30 vm03 bash[17524]: cluster 2026-03-09T14:21:28.936861+0000 mgr.x (mgr.14150) 203 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:32.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:32 vm04 bash[19581]: cluster 2026-03-09T14:21:30.937108+0000 mgr.x (mgr.14150) 204 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:32.757 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:32 vm04 bash[19581]: cluster 2026-03-09T14:21:30.937108+0000 mgr.x (mgr.14150) 204 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:32.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:32 vm05 bash[20070]: cluster 2026-03-09T14:21:30.937108+0000 mgr.x (mgr.14150) 204 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:32.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:32 vm05 bash[20070]: cluster 2026-03-09T14:21:30.937108+0000 mgr.x (mgr.14150) 204 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:32.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:32 vm03 bash[17524]: cluster 2026-03-09T14:21:30.937108+0000 mgr.x (mgr.14150) 204 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:32.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:32 vm03 bash[17524]: cluster 2026-03-09T14:21:30.937108+0000 mgr.x (mgr.14150) 204 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:34.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:34 vm04 bash[19581]: cluster 2026-03-09T14:21:32.937358+0000 mgr.x (mgr.14150) 205 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:34.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:34 vm04 bash[19581]: cluster 2026-03-09T14:21:32.937358+0000 mgr.x (mgr.14150) 205 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:34.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:34 vm05 bash[20070]: cluster 2026-03-09T14:21:32.937358+0000 mgr.x (mgr.14150) 205 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:34.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:34 vm05 bash[20070]: cluster 2026-03-09T14:21:32.937358+0000 mgr.x (mgr.14150) 205 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:34.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:34 vm03 bash[17524]: cluster 2026-03-09T14:21:32.937358+0000 mgr.x (mgr.14150) 205 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:34.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:34 vm03 bash[17524]: cluster 2026-03-09T14:21:32.937358+0000 mgr.x (mgr.14150) 205 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:35.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:35 vm05 bash[20070]: audit 2026-03-09T14:21:35.036983+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T14:21:35.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:35 vm05 bash[20070]: audit 2026-03-09T14:21:35.036983+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T14:21:35.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:35 vm05 bash[20070]: audit 
2026-03-09T14:21:35.037467+0000 mon.a (mon.0) 561 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:21:35.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:35 vm05 bash[20070]: audit 2026-03-09T14:21:35.037467+0000 mon.a (mon.0) 561 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:21:35.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:35 vm04 bash[19581]: audit 2026-03-09T14:21:35.036983+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T14:21:35.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:35 vm04 bash[19581]: audit 2026-03-09T14:21:35.036983+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T14:21:35.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:35 vm04 bash[19581]: audit 2026-03-09T14:21:35.037467+0000 mon.a (mon.0) 561 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:21:35.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:35 vm04 bash[19581]: audit 2026-03-09T14:21:35.037467+0000 mon.a (mon.0) 561 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:21:35.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:35 vm03 bash[17524]: audit 2026-03-09T14:21:35.036983+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T14:21:35.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:35 vm03 bash[17524]: audit 2026-03-09T14:21:35.036983+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T14:21:35.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:35 vm03 bash[17524]: audit 2026-03-09T14:21:35.037467+0000 mon.a (mon.0) 561 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:21:35.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:35 vm03 bash[17524]: audit 2026-03-09T14:21:35.037467+0000 mon.a (mon.0) 561 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:21:35.878 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:21:35.878 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:21:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:21:36.188 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:21:36.188 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:21:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:21:36.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:36 vm04 bash[19581]: cluster 2026-03-09T14:21:34.937607+0000 mgr.x (mgr.14150) 206 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:36.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:36 vm04 bash[19581]: cluster 2026-03-09T14:21:34.937607+0000 mgr.x (mgr.14150) 206 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:36.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:36 vm04 bash[19581]: cephadm 2026-03-09T14:21:35.037801+0000 mgr.x (mgr.14150) 207 : cephadm [INF] Deploying daemon osd.6 on vm05 2026-03-09T14:21:36.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:36 vm04 bash[19581]: cephadm 2026-03-09T14:21:35.037801+0000 mgr.x (mgr.14150) 207 : cephadm [INF] Deploying daemon osd.6 on vm05 2026-03-09T14:21:36.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:36 vm04 bash[19581]: audit 2026-03-09T14:21:35.970447+0000 mon.a (mon.0) 562 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:21:36.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:36 vm04 bash[19581]: audit 2026-03-09T14:21:35.970447+0000 mon.a (mon.0) 562 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:21:36.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:36 vm04 bash[19581]: audit 2026-03-09T14:21:35.975479+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:36.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:36 vm04 bash[19581]: audit 2026-03-09T14:21:35.975479+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:36.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:36 vm04 bash[19581]: audit 2026-03-09T14:21:35.979556+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:36.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:36 vm04 bash[19581]: audit 2026-03-09T14:21:35.979556+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:36.757 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:36 vm05 bash[20070]: cluster 2026-03-09T14:21:34.937607+0000 mgr.x (mgr.14150) 206 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:36.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:36 vm05 bash[20070]: cluster 2026-03-09T14:21:34.937607+0000 mgr.x (mgr.14150) 206 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:36.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:36 vm05 bash[20070]: cephadm 2026-03-09T14:21:35.037801+0000 mgr.x (mgr.14150) 207 : cephadm [INF] Deploying daemon osd.6 on vm05 2026-03-09T14:21:36.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:36 vm05 bash[20070]: cephadm 2026-03-09T14:21:35.037801+0000 mgr.x (mgr.14150) 207 : cephadm [INF] Deploying daemon osd.6 on vm05 2026-03-09T14:21:36.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:36 vm05 bash[20070]: audit 2026-03-09T14:21:35.970447+0000 mon.a (mon.0) 562 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:21:36.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:36 vm05 bash[20070]: audit 2026-03-09T14:21:35.970447+0000 mon.a (mon.0) 562 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:21:36.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:36 vm05 bash[20070]: audit 2026-03-09T14:21:35.975479+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:36.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:36 vm05 bash[20070]: audit 2026-03-09T14:21:35.975479+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:36.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:36 vm05 bash[20070]: audit 2026-03-09T14:21:35.979556+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:36.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:36 vm05 bash[20070]: audit 2026-03-09T14:21:35.979556+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:36.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:36 vm03 bash[17524]: cluster 2026-03-09T14:21:34.937607+0000 mgr.x (mgr.14150) 206 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:36.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:36 vm03 bash[17524]: cluster 2026-03-09T14:21:34.937607+0000 mgr.x (mgr.14150) 206 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:36.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:36 vm03 bash[17524]: cephadm 2026-03-09T14:21:35.037801+0000 mgr.x (mgr.14150) 207 : cephadm [INF] Deploying daemon osd.6 on vm05 2026-03-09T14:21:36.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:36 vm03 bash[17524]: cephadm 2026-03-09T14:21:35.037801+0000 mgr.x (mgr.14150) 207 : cephadm [INF] Deploying daemon osd.6 on vm05 2026-03-09T14:21:36.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:36 vm03 bash[17524]: audit 2026-03-09T14:21:35.970447+0000 mon.a (mon.0) 562 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' 
entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:21:36.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:36 vm03 bash[17524]: audit 2026-03-09T14:21:35.970447+0000 mon.a (mon.0) 562 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:21:36.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:36 vm03 bash[17524]: audit 2026-03-09T14:21:35.975479+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:36.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:36 vm03 bash[17524]: audit 2026-03-09T14:21:35.975479+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:36.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:36 vm03 bash[17524]: audit 2026-03-09T14:21:35.979556+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:36.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:36 vm03 bash[17524]: audit 2026-03-09T14:21:35.979556+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:38.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:38 vm03 bash[17524]: cluster 2026-03-09T14:21:36.937868+0000 mgr.x (mgr.14150) 208 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:38.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:38 vm03 bash[17524]: cluster 2026-03-09T14:21:36.937868+0000 mgr.x (mgr.14150) 208 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:39.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:38 vm04 bash[19581]: cluster 2026-03-09T14:21:36.937868+0000 mgr.x (mgr.14150) 208 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:39.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:38 vm04 bash[19581]: cluster 2026-03-09T14:21:36.937868+0000 mgr.x (mgr.14150) 208 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:39.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:38 vm05 bash[20070]: cluster 2026-03-09T14:21:36.937868+0000 mgr.x (mgr.14150) 208 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:39.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:38 vm05 bash[20070]: cluster 2026-03-09T14:21:36.937868+0000 mgr.x (mgr.14150) 208 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:21:39.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:39 vm03 bash[17524]: audit 2026-03-09T14:21:39.214270+0000 mon.c (mon.1) 14 : audit [INF] from='osd.6 [v2:192.168.123.105:6808/1573740685,v1:192.168.123.105:6809/1573740685]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T14:21:39.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:39 vm03 bash[17524]: audit 2026-03-09T14:21:39.214270+0000 mon.c (mon.1) 14 : audit [INF] from='osd.6 [v2:192.168.123.105:6808/1573740685,v1:192.168.123.105:6809/1573740685]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 
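The audit trail above is the usual cephadm OSD bring-up: client.bootstrap-osd registers the new OSD id with "osd new", mgr.x fetches the daemon's key ("auth get osd.6"), renders a minimal conf, and deploys the container; the daemon then inserts itself into the CRUSH map. For reference, the two CRUSH operations that follow in the log correspond to the commands below (a sketch only; the id, weight, and host/root args are copied from the audit records, and the commands assume a node with a client.admin keyring):

    # tag the new OSD's device class, as osd.6 does on startup
    ceph osd crush set-device-class hdd osd.6
    # place it under host=vm05 in the default root with the weight from the log (~0.0195, i.e. ~20 GiB)
    ceph osd crush create-or-move osd.6 0.0195 host=vm05 root=default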
2026-03-09T14:21:39.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:39 vm03 bash[17524]: audit 2026-03-09T14:21:39.215196+0000 mon.a (mon.0) 565 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-09T14:21:40.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:39 vm04 bash[19581]: audit 2026-03-09T14:21:39.214270+0000 mon.c (mon.1) 14 : audit [INF] from='osd.6 [v2:192.168.123.105:6808/1573740685,v1:192.168.123.105:6809/1573740685]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-09T14:21:40.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:39 vm04 bash[19581]: audit 2026-03-09T14:21:39.215196+0000 mon.a (mon.0) 565 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-09T14:21:40.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:39 vm05 bash[20070]: audit 2026-03-09T14:21:39.214270+0000 mon.c (mon.1) 14 : audit [INF] from='osd.6 [v2:192.168.123.105:6808/1573740685,v1:192.168.123.105:6809/1573740685]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-09T14:21:40.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:39 vm05 bash[20070]: audit 2026-03-09T14:21:39.215196+0000 mon.a (mon.0) 565 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-09T14:21:40.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:40 vm03 bash[17524]: cluster 2026-03-09T14:21:38.938146+0000 mgr.x (mgr.14150) 209 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-09T14:21:40.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:40 vm03 bash[17524]: audit 2026-03-09T14:21:39.520114+0000 mon.a (mon.0) 566 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished
2026-03-09T14:21:40.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:40 vm03 bash[17524]: audit 2026-03-09T14:21:39.522996+0000 mon.c (mon.1) 15 : audit [INF] from='osd.6 [v2:192.168.123.105:6808/1573740685,v1:192.168.123.105:6809/1573740685]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-09T14:21:40.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:40 vm03 bash[17524]: cluster 2026-03-09T14:21:39.523423+0000 mon.a (mon.0) 567 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in
2026-03-09T14:21:40.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:40 vm03 bash[17524]: audit 2026-03-09T14:21:39.523710+0000 mon.a (mon.0) 568 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T14:21:40.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:40 vm03 bash[17524]: audit 2026-03-09T14:21:39.524083+0000 mon.a (mon.0) 569 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-09T14:21:41.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:40 vm04 bash[19581]: cluster 2026-03-09T14:21:38.938146+0000 mgr.x (mgr.14150) 209 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-09T14:21:41.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:40 vm04 bash[19581]: audit 2026-03-09T14:21:39.520114+0000 mon.a (mon.0) 566 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished
2026-03-09T14:21:41.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:40 vm04 bash[19581]: audit 2026-03-09T14:21:39.522996+0000 mon.c (mon.1) 15 : audit [INF] from='osd.6 [v2:192.168.123.105:6808/1573740685,v1:192.168.123.105:6809/1573740685]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-09T14:21:41.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:40 vm04 bash[19581]: cluster 2026-03-09T14:21:39.523423+0000 mon.a (mon.0) 567 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in
2026-03-09T14:21:41.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:40 vm04 bash[19581]: audit 2026-03-09T14:21:39.523710+0000 mon.a (mon.0) 568 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T14:21:41.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:40 vm04 bash[19581]: audit 2026-03-09T14:21:39.524083+0000 mon.a (mon.0) 569 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-09T14:21:41.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:40 vm05 bash[20070]: cluster 2026-03-09T14:21:38.938146+0000 mgr.x (mgr.14150) 209 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-09T14:21:41.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:40 vm05 bash[20070]: audit 2026-03-09T14:21:39.520114+0000 mon.a (mon.0) 566 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished
2026-03-09T14:21:41.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:40 vm05 bash[20070]: audit 2026-03-09T14:21:39.522996+0000 mon.c (mon.1) 15 : audit [INF] from='osd.6 [v2:192.168.123.105:6808/1573740685,v1:192.168.123.105:6809/1573740685]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-09T14:21:41.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:40 vm05 bash[20070]: cluster 2026-03-09T14:21:39.523423+0000 mon.a (mon.0) 567 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in
2026-03-09T14:21:41.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:40 vm05 bash[20070]: audit 2026-03-09T14:21:39.523710+0000 mon.a (mon.0) 568 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T14:21:41.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:40 vm05 bash[20070]: audit 2026-03-09T14:21:39.524083+0000 mon.a (mon.0) 569 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-09T14:21:41.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:41 vm03 bash[17524]: audit 2026-03-09T14:21:40.523401+0000 mon.a (mon.0) 570 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished
2026-03-09T14:21:41.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:41 vm03 bash[17524]: cluster 2026-03-09T14:21:40.525874+0000 mon.a (mon.0) 571 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in
2026-03-09T14:21:41.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:41 vm03 bash[17524]: audit 2026-03-09T14:21:40.526765+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T14:21:41.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:41 vm03 bash[17524]: audit 2026-03-09T14:21:40.533056+0000 mon.a (mon.0) 573 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T14:21:41.881 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:41 vm05 bash[20070]: audit 2026-03-09T14:21:40.523401+0000 mon.a (mon.0) 570 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished
2026-03-09T14:21:41.881 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:41 vm05 bash[20070]: cluster 2026-03-09T14:21:40.525874+0000 mon.a (mon.0) 571 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in
2026-03-09T14:21:41.881 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:41 vm05 bash[20070]: audit 2026-03-09T14:21:40.526765+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T14:21:41.881 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:41 vm05 bash[20070]: audit 2026-03-09T14:21:40.533056+0000 mon.a (mon.0) 573 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T14:21:42.006 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:41 vm04 bash[19581]: audit 2026-03-09T14:21:40.523401+0000 mon.a (mon.0) 570 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished
2026-03-09T14:21:42.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:41 vm04 bash[19581]: cluster 2026-03-09T14:21:40.525874+0000 mon.a (mon.0) 571 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in
2026-03-09T14:21:42.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:41 vm04 bash[19581]: audit 2026-03-09T14:21:40.526765+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T14:21:42.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:41 vm04 bash[19581]: audit 2026-03-09T14:21:40.533056+0000 mon.a (mon.0) 573 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T14:21:42.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:42 vm03 bash[17524]: cluster 2026-03-09T14:21:40.198014+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T14:21:42.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:42 vm03 bash[17524]: cluster 2026-03-09T14:21:40.198078+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T14:21:42.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:42 vm03 bash[17524]: cluster 2026-03-09T14:21:40.938414+0000 mgr.x (mgr.14150) 210 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-09T14:21:42.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:42 vm03 bash[17524]: audit 2026-03-09T14:21:41.530289+0000 mon.a (mon.0) 574 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T14:21:42.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:42 vm03 bash[17524]: cluster 2026-03-09T14:21:41.541441+0000 mon.a (mon.0) 575 : cluster [INF] osd.6 [v2:192.168.123.105:6808/1573740685,v1:192.168.123.105:6809/1573740685] boot
2026-03-09T14:21:42.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:42 vm03 bash[17524]: cluster 2026-03-09T14:21:41.541540+0000 mon.a (mon.0) 576 : cluster [DBG] osdmap e43: 7 total, 7 up, 7 in
2026-03-09T14:21:42.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:42 vm03 bash[17524]: audit 2026-03-09T14:21:41.541880+0000 mon.a (mon.0) 577 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T14:21:42.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:42 vm03 bash[17524]: audit 2026-03-09T14:21:42.069786+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:21:42.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:42 vm03 bash[17524]: audit 2026-03-09T14:21:42.073931+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:21:42.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:42 vm03 bash[17524]: audit 2026-03-09T14:21:42.458966+0000 mon.a (mon.0) 580 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:21:42.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:42 vm03 bash[17524]: audit 2026-03-09T14:21:42.459540+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:21:42.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:42 vm03 bash[17524]: audit 2026-03-09T14:21:42.464277+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:21:42.880 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:42 vm05 bash[20070]: cluster 2026-03-09T14:21:40.198014+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T14:21:42.880 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:42 vm05 bash[20070]: cluster 2026-03-09T14:21:40.198078+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T14:21:42.880 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:42 vm05 bash[20070]: cluster 2026-03-09T14:21:40.938414+0000 mgr.x (mgr.14150) 210 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-09T14:21:42.880 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:42 vm05 bash[20070]: audit 2026-03-09T14:21:41.530289+0000 mon.a (mon.0) 574 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T14:21:42.881 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:42 vm05 bash[20070]: cluster 2026-03-09T14:21:41.541441+0000 mon.a (mon.0) 575 : cluster [INF] osd.6 [v2:192.168.123.105:6808/1573740685,v1:192.168.123.105:6809/1573740685] boot
2026-03-09T14:21:42.881 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:42 vm05 bash[20070]: cluster 2026-03-09T14:21:41.541540+0000 mon.a (mon.0) 576 : cluster [DBG] osdmap e43: 7 total, 7 up, 7 in
2026-03-09T14:21:42.881 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:42 vm05 bash[20070]: audit 2026-03-09T14:21:41.541880+0000 mon.a (mon.0) 577 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T14:21:42.881 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:42 vm05 bash[20070]: audit 2026-03-09T14:21:42.069786+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:21:42.881 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:42 vm05 bash[20070]: audit 2026-03-09T14:21:42.073931+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:21:42.881 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:42 vm05 bash[20070]: audit 2026-03-09T14:21:42.458966+0000 mon.a (mon.0) 580 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:21:42.881 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:42 vm05 bash[20070]: audit 2026-03-09T14:21:42.459540+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:21:42.881 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:42 vm05 bash[20070]: audit 2026-03-09T14:21:42.464277+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:21:42.986 INFO:teuthology.orchestra.run.vm05.stdout:Created osd(s) 6 on host 'vm05'
2026-03-09T14:21:43.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:42 vm04 bash[19581]: cluster 2026-03-09T14:21:40.198014+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T14:21:43.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:42 vm04 bash[19581]: cluster 2026-03-09T14:21:40.198078+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T14:21:43.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:42 vm04 bash[19581]: cluster 2026-03-09T14:21:40.938414+0000 mgr.x (mgr.14150) 210 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-09T14:21:43.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:42 vm04 bash[19581]: audit 2026-03-09T14:21:41.530289+0000 mon.a (mon.0) 574 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T14:21:43.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:42 vm04 bash[19581]: cluster 2026-03-09T14:21:41.541441+0000 mon.a (mon.0) 575 : cluster [INF] osd.6 [v2:192.168.123.105:6808/1573740685,v1:192.168.123.105:6809/1573740685] boot
2026-03-09T14:21:43.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:42 vm04 bash[19581]: cluster 2026-03-09T14:21:41.541540+0000 mon.a (mon.0) 576 : cluster [DBG] osdmap e43: 7 total, 7 up, 7 in
2026-03-09T14:21:43.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:42 vm04 bash[19581]: audit 2026-03-09T14:21:41.541880+0000 mon.a (mon.0) 577 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-09T14:21:43.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:42 vm04 bash[19581]: audit 2026-03-09T14:21:42.069786+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:21:43.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:42 vm04 bash[19581]: audit 2026-03-09T14:21:42.073931+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:21:43.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:42 vm04 bash[19581]: audit 2026-03-09T14:21:42.458966+0000 mon.a (mon.0) 580 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:21:43.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:42 vm04 bash[19581]: audit 2026-03-09T14:21:42.459540+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:21:43.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:42 vm04 bash[19581]: audit 2026-03-09T14:21:42.464277+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:21:43.069 DEBUG:teuthology.orchestra.run.vm05:osd.6> sudo journalctl -f -n 0 -u ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.6.service
2026-03-09T14:21:43.070 INFO:tasks.cephadm:Deploying osd.7 on vm05 with /dev/vdc...
2026-03-09T14:21:43.070 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- lvm zap /dev/vdc 2026-03-09T14:21:43.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:43 vm03 bash[17524]: audit 2026-03-09T14:21:42.975379+0000 mon.a (mon.0) 583 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:21:43.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:43 vm03 bash[17524]: audit 2026-03-09T14:21:42.975379+0000 mon.a (mon.0) 583 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:21:43.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:43 vm03 bash[17524]: audit 2026-03-09T14:21:42.979897+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:43.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:43 vm03 bash[17524]: audit 2026-03-09T14:21:42.979897+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:43.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:43 vm03 bash[17524]: audit 2026-03-09T14:21:42.984025+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:43.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:43 vm03 bash[17524]: audit 2026-03-09T14:21:42.984025+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:43.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:43 vm03 bash[17524]: cluster 2026-03-09T14:21:43.077481+0000 mon.a (mon.0) 586 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-09T14:21:43.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:43 vm03 bash[17524]: cluster 2026-03-09T14:21:43.077481+0000 mon.a (mon.0) 586 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-09T14:21:44.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:43 vm04 bash[19581]: audit 2026-03-09T14:21:42.975379+0000 mon.a (mon.0) 583 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:21:44.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:43 vm04 bash[19581]: audit 2026-03-09T14:21:42.975379+0000 mon.a (mon.0) 583 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:21:44.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:43 vm04 bash[19581]: audit 2026-03-09T14:21:42.979897+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:44.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:43 vm04 bash[19581]: audit 2026-03-09T14:21:42.979897+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:44.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:43 vm04 bash[19581]: audit 2026-03-09T14:21:42.984025+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:44.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:43 vm04 
bash[19581]: audit 2026-03-09T14:21:42.984025+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:44.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:43 vm04 bash[19581]: cluster 2026-03-09T14:21:43.077481+0000 mon.a (mon.0) 586 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-09T14:21:44.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:43 vm04 bash[19581]: cluster 2026-03-09T14:21:43.077481+0000 mon.a (mon.0) 586 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-09T14:21:44.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:43 vm05 bash[20070]: audit 2026-03-09T14:21:42.975379+0000 mon.a (mon.0) 583 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:21:44.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:43 vm05 bash[20070]: audit 2026-03-09T14:21:42.975379+0000 mon.a (mon.0) 583 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:21:44.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:43 vm05 bash[20070]: audit 2026-03-09T14:21:42.979897+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:44.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:43 vm05 bash[20070]: audit 2026-03-09T14:21:42.979897+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:44.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:43 vm05 bash[20070]: audit 2026-03-09T14:21:42.984025+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:44.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:43 vm05 bash[20070]: audit 2026-03-09T14:21:42.984025+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:44.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:43 vm05 bash[20070]: cluster 2026-03-09T14:21:43.077481+0000 mon.a (mon.0) 586 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-09T14:21:44.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:43 vm05 bash[20070]: cluster 2026-03-09T14:21:43.077481+0000 mon.a (mon.0) 586 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-09T14:21:44.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:44 vm03 bash[17524]: cluster 2026-03-09T14:21:42.938667+0000 mgr.x (mgr.14150) 211 : cluster [DBG] pgmap v175: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:44.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:44 vm03 bash[17524]: cluster 2026-03-09T14:21:42.938667+0000 mgr.x (mgr.14150) 211 : cluster [DBG] pgmap v175: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:45.006 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:44 vm04 bash[19581]: cluster 2026-03-09T14:21:42.938667+0000 mgr.x (mgr.14150) 211 : cluster [DBG] pgmap v175: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:45.006 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:44 vm04 bash[19581]: cluster 2026-03-09T14:21:42.938667+0000 mgr.x (mgr.14150) 211 : cluster [DBG] pgmap v175: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:45.007 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:44 vm05 bash[20070]: cluster 2026-03-09T14:21:42.938667+0000 mgr.x (mgr.14150) 211 : cluster [DBG] pgmap v175: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:45.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:44 vm05 bash[20070]: cluster 2026-03-09T14:21:42.938667+0000 mgr.x (mgr.14150) 211 : cluster [DBG] pgmap v175: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:47.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:46 vm04 bash[19581]: cluster 2026-03-09T14:21:44.938900+0000 mgr.x (mgr.14150) 212 : cluster [DBG] pgmap v177: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:47.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:46 vm04 bash[19581]: cluster 2026-03-09T14:21:44.938900+0000 mgr.x (mgr.14150) 212 : cluster [DBG] pgmap v177: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:47.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:46 vm05 bash[20070]: cluster 2026-03-09T14:21:44.938900+0000 mgr.x (mgr.14150) 212 : cluster [DBG] pgmap v177: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:47.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:46 vm05 bash[20070]: cluster 2026-03-09T14:21:44.938900+0000 mgr.x (mgr.14150) 212 : cluster [DBG] pgmap v177: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:47.052 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:46 vm03 bash[17524]: cluster 2026-03-09T14:21:44.938900+0000 mgr.x (mgr.14150) 212 : cluster [DBG] pgmap v177: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:47.052 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:46 vm03 bash[17524]: cluster 2026-03-09T14:21:44.938900+0000 mgr.x (mgr.14150) 212 : cluster [DBG] pgmap v177: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:47.722 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.c/config 2026-03-09T14:21:47.812 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:47 vm05 bash[20070]: cluster 2026-03-09T14:21:46.939159+0000 mgr.x (mgr.14150) 213 : cluster [DBG] pgmap v178: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:47.812 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:47 vm05 bash[20070]: cluster 2026-03-09T14:21:46.939159+0000 mgr.x (mgr.14150) 213 : cluster [DBG] pgmap v178: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:48.006 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:47 vm04 bash[19581]: cluster 2026-03-09T14:21:46.939159+0000 mgr.x (mgr.14150) 213 : cluster [DBG] pgmap v178: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:48.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:47 vm04 bash[19581]: cluster 2026-03-09T14:21:46.939159+0000 mgr.x (mgr.14150) 213 : cluster [DBG] pgmap v178: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:48.052 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:47 vm03 bash[17524]: cluster 2026-03-09T14:21:46.939159+0000 mgr.x (mgr.14150) 213 : cluster [DBG] pgmap v178: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:48.052 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:47 vm03 bash[17524]: cluster 2026-03-09T14:21:46.939159+0000 mgr.x (mgr.14150) 213 : cluster [DBG] pgmap v178: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:48.521 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:21:48.535 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph orch daemon add osd vm05:/dev/vdc 2026-03-09T14:21:50.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:50 vm04 bash[19581]: cluster 2026-03-09T14:21:48.939394+0000 mgr.x (mgr.14150) 214 : cluster [DBG] pgmap v179: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:50.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:50 vm04 bash[19581]: cluster 2026-03-09T14:21:48.939394+0000 mgr.x (mgr.14150) 214 : cluster [DBG] pgmap v179: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:50.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:50 vm04 bash[19581]: cephadm 2026-03-09T14:21:49.202547+0000 mgr.x (mgr.14150) 215 : cephadm [INF] Detected new or changed devices on vm05 2026-03-09T14:21:50.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:50 vm04 bash[19581]: cephadm 2026-03-09T14:21:49.202547+0000 mgr.x (mgr.14150) 215 : cephadm [INF] Detected new or changed devices on vm05 2026-03-09T14:21:50.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:50 vm04 bash[19581]: audit 2026-03-09T14:21:49.207527+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:50.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:50 vm04 bash[19581]: audit 2026-03-09T14:21:49.207527+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:50.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:50 vm04 bash[19581]: audit 2026-03-09T14:21:49.210869+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:50.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:50 vm04 bash[19581]: audit 2026-03-09T14:21:49.210869+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:50.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:50 vm04 bash[19581]: audit 2026-03-09T14:21:49.211568+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:21:50.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:50 vm04 bash[19581]: audit 2026-03-09T14:21:49.211568+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:21:50.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:50 vm04 bash[19581]: audit 2026-03-09T14:21:49.212065+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:21:50.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:50 vm04 bash[19581]: audit 
2026-03-09T14:21:49.212065+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:21:50.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:50 vm04 bash[19581]: cephadm 2026-03-09T14:21:49.212518+0000 mgr.x (mgr.14150) 216 : cephadm [INF] Adjusting osd_memory_target on vm05 to 2275M 2026-03-09T14:21:50.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:50 vm04 bash[19581]: cephadm 2026-03-09T14:21:49.212518+0000 mgr.x (mgr.14150) 216 : cephadm [INF] Adjusting osd_memory_target on vm05 to 2275M 2026-03-09T14:21:50.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:50 vm04 bash[19581]: audit 2026-03-09T14:21:49.216242+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:50.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:50 vm04 bash[19581]: audit 2026-03-09T14:21:49.216242+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:50.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:50 vm04 bash[19581]: audit 2026-03-09T14:21:49.217920+0000 mon.a (mon.0) 592 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:21:50.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:50 vm04 bash[19581]: audit 2026-03-09T14:21:49.217920+0000 mon.a (mon.0) 592 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:21:50.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:50 vm04 bash[19581]: audit 2026-03-09T14:21:49.218309+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:21:50.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:50 vm04 bash[19581]: audit 2026-03-09T14:21:49.218309+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:21:50.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:50 vm04 bash[19581]: audit 2026-03-09T14:21:49.222328+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:50.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:50 vm04 bash[19581]: audit 2026-03-09T14:21:49.222328+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:50.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:50 vm05 bash[20070]: cluster 2026-03-09T14:21:48.939394+0000 mgr.x (mgr.14150) 214 : cluster [DBG] pgmap v179: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:50.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:50 vm05 bash[20070]: cluster 2026-03-09T14:21:48.939394+0000 mgr.x (mgr.14150) 214 : cluster [DBG] pgmap v179: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:50.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:50 vm05 bash[20070]: cephadm 2026-03-09T14:21:49.202547+0000 mgr.x (mgr.14150) 215 : cephadm [INF] Detected new or changed devices on vm05 2026-03-09T14:21:50.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:50 vm05 
bash[20070]: cephadm 2026-03-09T14:21:49.202547+0000 mgr.x (mgr.14150) 215 : cephadm [INF] Detected new or changed devices on vm05 2026-03-09T14:21:50.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:50 vm05 bash[20070]: audit 2026-03-09T14:21:49.207527+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:50.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:50 vm05 bash[20070]: audit 2026-03-09T14:21:49.207527+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:50.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:50 vm05 bash[20070]: audit 2026-03-09T14:21:49.210869+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:50.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:50 vm05 bash[20070]: audit 2026-03-09T14:21:49.210869+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:50.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:50 vm05 bash[20070]: audit 2026-03-09T14:21:49.211568+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:21:50.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:50 vm05 bash[20070]: audit 2026-03-09T14:21:49.211568+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:21:50.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:50 vm05 bash[20070]: audit 2026-03-09T14:21:49.212065+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:21:50.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:50 vm05 bash[20070]: audit 2026-03-09T14:21:49.212065+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:21:50.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:50 vm05 bash[20070]: cephadm 2026-03-09T14:21:49.212518+0000 mgr.x (mgr.14150) 216 : cephadm [INF] Adjusting osd_memory_target on vm05 to 2275M 2026-03-09T14:21:50.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:50 vm05 bash[20070]: cephadm 2026-03-09T14:21:49.212518+0000 mgr.x (mgr.14150) 216 : cephadm [INF] Adjusting osd_memory_target on vm05 to 2275M 2026-03-09T14:21:50.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:50 vm05 bash[20070]: audit 2026-03-09T14:21:49.216242+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:50.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:50 vm05 bash[20070]: audit 2026-03-09T14:21:49.216242+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:50.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:50 vm05 bash[20070]: audit 2026-03-09T14:21:49.217920+0000 mon.a (mon.0) 592 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:21:50.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 
14:21:50 vm05 bash[20070]: audit 2026-03-09T14:21:49.217920+0000 mon.a (mon.0) 592 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:21:50.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:50 vm05 bash[20070]: audit 2026-03-09T14:21:49.218309+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:21:50.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:50 vm05 bash[20070]: audit 2026-03-09T14:21:49.218309+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:21:50.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:50 vm05 bash[20070]: audit 2026-03-09T14:21:49.222328+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:50.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:50 vm05 bash[20070]: audit 2026-03-09T14:21:49.222328+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:50.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:50 vm03 bash[17524]: cluster 2026-03-09T14:21:48.939394+0000 mgr.x (mgr.14150) 214 : cluster [DBG] pgmap v179: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:50.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:50 vm03 bash[17524]: cluster 2026-03-09T14:21:48.939394+0000 mgr.x (mgr.14150) 214 : cluster [DBG] pgmap v179: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:50.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:50 vm03 bash[17524]: cephadm 2026-03-09T14:21:49.202547+0000 mgr.x (mgr.14150) 215 : cephadm [INF] Detected new or changed devices on vm05 2026-03-09T14:21:50.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:50 vm03 bash[17524]: cephadm 2026-03-09T14:21:49.202547+0000 mgr.x (mgr.14150) 215 : cephadm [INF] Detected new or changed devices on vm05 2026-03-09T14:21:50.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:50 vm03 bash[17524]: audit 2026-03-09T14:21:49.207527+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:50.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:50 vm03 bash[17524]: audit 2026-03-09T14:21:49.207527+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:50.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:50 vm03 bash[17524]: audit 2026-03-09T14:21:49.210869+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:50.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:50 vm03 bash[17524]: audit 2026-03-09T14:21:49.210869+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:50.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:50 vm03 bash[17524]: audit 2026-03-09T14:21:49.211568+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:21:50.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:50 vm03 bash[17524]: audit 
2026-03-09T14:21:49.211568+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:21:50.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:50 vm03 bash[17524]: audit 2026-03-09T14:21:49.212065+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:21:50.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:50 vm03 bash[17524]: audit 2026-03-09T14:21:49.212065+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:21:50.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:50 vm03 bash[17524]: cephadm 2026-03-09T14:21:49.212518+0000 mgr.x (mgr.14150) 216 : cephadm [INF] Adjusting osd_memory_target on vm05 to 2275M 2026-03-09T14:21:50.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:50 vm03 bash[17524]: cephadm 2026-03-09T14:21:49.212518+0000 mgr.x (mgr.14150) 216 : cephadm [INF] Adjusting osd_memory_target on vm05 to 2275M 2026-03-09T14:21:50.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:50 vm03 bash[17524]: audit 2026-03-09T14:21:49.216242+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:50.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:50 vm03 bash[17524]: audit 2026-03-09T14:21:49.216242+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:50.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:50 vm03 bash[17524]: audit 2026-03-09T14:21:49.217920+0000 mon.a (mon.0) 592 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:21:50.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:50 vm03 bash[17524]: audit 2026-03-09T14:21:49.217920+0000 mon.a (mon.0) 592 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:21:50.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:50 vm03 bash[17524]: audit 2026-03-09T14:21:49.218309+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:21:50.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:50 vm03 bash[17524]: audit 2026-03-09T14:21:49.218309+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:21:50.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:50 vm03 bash[17524]: audit 2026-03-09T14:21:49.222328+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:50.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:50 vm03 bash[17524]: audit 2026-03-09T14:21:49.222328+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:21:52.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:52 vm04 bash[19581]: cluster 2026-03-09T14:21:50.939616+0000 mgr.x (mgr.14150) 217 : cluster [DBG] pgmap v180: 1 pgs: 1 
active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:52.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:52 vm04 bash[19581]: cluster 2026-03-09T14:21:50.939616+0000 mgr.x (mgr.14150) 217 : cluster [DBG] pgmap v180: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:52.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:52 vm05 bash[20070]: cluster 2026-03-09T14:21:50.939616+0000 mgr.x (mgr.14150) 217 : cluster [DBG] pgmap v180: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:52.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:52 vm05 bash[20070]: cluster 2026-03-09T14:21:50.939616+0000 mgr.x (mgr.14150) 217 : cluster [DBG] pgmap v180: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:52.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:52 vm03 bash[17524]: cluster 2026-03-09T14:21:50.939616+0000 mgr.x (mgr.14150) 217 : cluster [DBG] pgmap v180: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:52.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:52 vm03 bash[17524]: cluster 2026-03-09T14:21:50.939616+0000 mgr.x (mgr.14150) 217 : cluster [DBG] pgmap v180: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:53.174 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.c/config 2026-03-09T14:21:54.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:54 vm04 bash[19581]: cluster 2026-03-09T14:21:52.939826+0000 mgr.x (mgr.14150) 218 : cluster [DBG] pgmap v181: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:54.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:54 vm04 bash[19581]: cluster 2026-03-09T14:21:52.939826+0000 mgr.x (mgr.14150) 218 : cluster [DBG] pgmap v181: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:54.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:54 vm04 bash[19581]: audit 2026-03-09T14:21:53.417578+0000 mgr.x (mgr.14150) 219 : audit [DBG] from='client.24271 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:21:54.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:54 vm04 bash[19581]: audit 2026-03-09T14:21:53.417578+0000 mgr.x (mgr.14150) 219 : audit [DBG] from='client.24271 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:21:54.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:54 vm04 bash[19581]: audit 2026-03-09T14:21:53.418922+0000 mon.a (mon.0) 595 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:21:54.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:54 vm04 bash[19581]: audit 2026-03-09T14:21:53.418922+0000 mon.a (mon.0) 595 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:21:54.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:54 vm04 bash[19581]: audit 2026-03-09T14:21:53.420194+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": 
"auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:21:54.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:54 vm04 bash[19581]: audit 2026-03-09T14:21:53.420194+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:21:54.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:54 vm04 bash[19581]: audit 2026-03-09T14:21:53.420634+0000 mon.a (mon.0) 597 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:21:54.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:54 vm04 bash[19581]: audit 2026-03-09T14:21:53.420634+0000 mon.a (mon.0) 597 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:21:54.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:54 vm05 bash[20070]: cluster 2026-03-09T14:21:52.939826+0000 mgr.x (mgr.14150) 218 : cluster [DBG] pgmap v181: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:54.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:54 vm05 bash[20070]: cluster 2026-03-09T14:21:52.939826+0000 mgr.x (mgr.14150) 218 : cluster [DBG] pgmap v181: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:54.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:54 vm05 bash[20070]: audit 2026-03-09T14:21:53.417578+0000 mgr.x (mgr.14150) 219 : audit [DBG] from='client.24271 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:21:54.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:54 vm05 bash[20070]: audit 2026-03-09T14:21:53.417578+0000 mgr.x (mgr.14150) 219 : audit [DBG] from='client.24271 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:21:54.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:54 vm05 bash[20070]: audit 2026-03-09T14:21:53.418922+0000 mon.a (mon.0) 595 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:21:54.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:54 vm05 bash[20070]: audit 2026-03-09T14:21:53.418922+0000 mon.a (mon.0) 595 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:21:54.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:54 vm05 bash[20070]: audit 2026-03-09T14:21:53.420194+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:21:54.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:54 vm05 bash[20070]: audit 2026-03-09T14:21:53.420194+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:21:54.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:54 vm05 bash[20070]: audit 2026-03-09T14:21:53.420634+0000 mon.a (mon.0) 597 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:21:54.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:54 vm05 bash[20070]: audit 2026-03-09T14:21:53.420634+0000 mon.a (mon.0) 597 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:21:54.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:54 vm03 bash[17524]: cluster 2026-03-09T14:21:52.939826+0000 mgr.x (mgr.14150) 218 : cluster [DBG] pgmap v181: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:54.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:54 vm03 bash[17524]: cluster 2026-03-09T14:21:52.939826+0000 mgr.x (mgr.14150) 218 : cluster [DBG] pgmap v181: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:54.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:54 vm03 bash[17524]: audit 2026-03-09T14:21:53.417578+0000 mgr.x (mgr.14150) 219 : audit [DBG] from='client.24271 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:21:54.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:54 vm03 bash[17524]: audit 2026-03-09T14:21:53.417578+0000 mgr.x (mgr.14150) 219 : audit [DBG] from='client.24271 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:21:54.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:54 vm03 bash[17524]: audit 2026-03-09T14:21:53.418922+0000 mon.a (mon.0) 595 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:21:54.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:54 vm03 bash[17524]: audit 2026-03-09T14:21:53.418922+0000 mon.a (mon.0) 595 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:21:54.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:54 vm03 bash[17524]: audit 2026-03-09T14:21:53.420194+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:21:54.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:54 vm03 bash[17524]: audit 2026-03-09T14:21:53.420194+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:21:54.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:54 vm03 bash[17524]: audit 2026-03-09T14:21:53.420634+0000 mon.a (mon.0) 597 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:21:54.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:54 vm03 bash[17524]: audit 2026-03-09T14:21:53.420634+0000 mon.a (mon.0) 597 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:21:56.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:56 vm04 bash[19581]: cluster 2026-03-09T14:21:54.940049+0000 mgr.x (mgr.14150) 220 : cluster [DBG] pgmap v182: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 
140 GiB avail 2026-03-09T14:21:56.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:56 vm04 bash[19581]: cluster 2026-03-09T14:21:54.940049+0000 mgr.x (mgr.14150) 220 : cluster [DBG] pgmap v182: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:56.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:56 vm05 bash[20070]: cluster 2026-03-09T14:21:54.940049+0000 mgr.x (mgr.14150) 220 : cluster [DBG] pgmap v182: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:56.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:56 vm05 bash[20070]: cluster 2026-03-09T14:21:54.940049+0000 mgr.x (mgr.14150) 220 : cluster [DBG] pgmap v182: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:56.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:56 vm03 bash[17524]: cluster 2026-03-09T14:21:54.940049+0000 mgr.x (mgr.14150) 220 : cluster [DBG] pgmap v182: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:56.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:56 vm03 bash[17524]: cluster 2026-03-09T14:21:54.940049+0000 mgr.x (mgr.14150) 220 : cluster [DBG] pgmap v182: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:58.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:58 vm04 bash[19581]: cluster 2026-03-09T14:21:56.940295+0000 mgr.x (mgr.14150) 221 : cluster [DBG] pgmap v183: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:58.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:58 vm04 bash[19581]: cluster 2026-03-09T14:21:56.940295+0000 mgr.x (mgr.14150) 221 : cluster [DBG] pgmap v183: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:58.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:58 vm04 bash[19581]: audit 2026-03-09T14:21:57.781637+0000 mon.c (mon.1) 16 : audit [INF] from='client.? 192.168.123.105:0/3456469166' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "377ff461-7194-48e3-8093-29ef296bd4de"}]: dispatch 2026-03-09T14:21:58.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:58 vm04 bash[19581]: audit 2026-03-09T14:21:57.781637+0000 mon.c (mon.1) 16 : audit [INF] from='client.? 192.168.123.105:0/3456469166' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "377ff461-7194-48e3-8093-29ef296bd4de"}]: dispatch 2026-03-09T14:21:58.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:58 vm04 bash[19581]: audit 2026-03-09T14:21:57.782917+0000 mon.a (mon.0) 598 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "377ff461-7194-48e3-8093-29ef296bd4de"}]: dispatch 2026-03-09T14:21:58.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:58 vm04 bash[19581]: audit 2026-03-09T14:21:57.782917+0000 mon.a (mon.0) 598 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "377ff461-7194-48e3-8093-29ef296bd4de"}]: dispatch 2026-03-09T14:21:58.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:58 vm04 bash[19581]: audit 2026-03-09T14:21:57.786216+0000 mon.a (mon.0) 599 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "377ff461-7194-48e3-8093-29ef296bd4de"}]': finished 2026-03-09T14:21:58.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:58 vm04 bash[19581]: audit 2026-03-09T14:21:57.786216+0000 mon.a (mon.0) 599 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "377ff461-7194-48e3-8093-29ef296bd4de"}]': finished 2026-03-09T14:21:58.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:58 vm04 bash[19581]: cluster 2026-03-09T14:21:57.788281+0000 mon.a (mon.0) 600 : cluster [DBG] osdmap e45: 8 total, 7 up, 8 in 2026-03-09T14:21:58.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:58 vm04 bash[19581]: cluster 2026-03-09T14:21:57.788281+0000 mon.a (mon.0) 600 : cluster [DBG] osdmap e45: 8 total, 7 up, 8 in 2026-03-09T14:21:58.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:58 vm04 bash[19581]: audit 2026-03-09T14:21:57.788371+0000 mon.a (mon.0) 601 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:21:58.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:58 vm04 bash[19581]: audit 2026-03-09T14:21:57.788371+0000 mon.a (mon.0) 601 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:21:58.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:58 vm05 bash[20070]: cluster 2026-03-09T14:21:56.940295+0000 mgr.x (mgr.14150) 221 : cluster [DBG] pgmap v183: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:58.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:58 vm05 bash[20070]: cluster 2026-03-09T14:21:56.940295+0000 mgr.x (mgr.14150) 221 : cluster [DBG] pgmap v183: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:58.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:58 vm05 bash[20070]: audit 2026-03-09T14:21:57.781637+0000 mon.c (mon.1) 16 : audit [INF] from='client.? 192.168.123.105:0/3456469166' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "377ff461-7194-48e3-8093-29ef296bd4de"}]: dispatch 2026-03-09T14:21:58.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:58 vm05 bash[20070]: audit 2026-03-09T14:21:57.781637+0000 mon.c (mon.1) 16 : audit [INF] from='client.? 192.168.123.105:0/3456469166' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "377ff461-7194-48e3-8093-29ef296bd4de"}]: dispatch 2026-03-09T14:21:58.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:58 vm05 bash[20070]: audit 2026-03-09T14:21:57.782917+0000 mon.a (mon.0) 598 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "377ff461-7194-48e3-8093-29ef296bd4de"}]: dispatch 2026-03-09T14:21:58.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:58 vm05 bash[20070]: audit 2026-03-09T14:21:57.782917+0000 mon.a (mon.0) 598 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "377ff461-7194-48e3-8093-29ef296bd4de"}]: dispatch 2026-03-09T14:21:58.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:58 vm05 bash[20070]: audit 2026-03-09T14:21:57.786216+0000 mon.a (mon.0) 599 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "377ff461-7194-48e3-8093-29ef296bd4de"}]': finished 2026-03-09T14:21:58.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:58 vm05 bash[20070]: audit 2026-03-09T14:21:57.786216+0000 mon.a (mon.0) 599 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "377ff461-7194-48e3-8093-29ef296bd4de"}]': finished 2026-03-09T14:21:58.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:58 vm05 bash[20070]: cluster 2026-03-09T14:21:57.788281+0000 mon.a (mon.0) 600 : cluster [DBG] osdmap e45: 8 total, 7 up, 8 in 2026-03-09T14:21:58.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:58 vm05 bash[20070]: cluster 2026-03-09T14:21:57.788281+0000 mon.a (mon.0) 600 : cluster [DBG] osdmap e45: 8 total, 7 up, 8 in 2026-03-09T14:21:58.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:58 vm05 bash[20070]: audit 2026-03-09T14:21:57.788371+0000 mon.a (mon.0) 601 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:21:58.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:58 vm05 bash[20070]: audit 2026-03-09T14:21:57.788371+0000 mon.a (mon.0) 601 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:21:58.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:58 vm03 bash[17524]: cluster 2026-03-09T14:21:56.940295+0000 mgr.x (mgr.14150) 221 : cluster [DBG] pgmap v183: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:58.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:58 vm03 bash[17524]: cluster 2026-03-09T14:21:56.940295+0000 mgr.x (mgr.14150) 221 : cluster [DBG] pgmap v183: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:21:58.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:58 vm03 bash[17524]: audit 2026-03-09T14:21:57.781637+0000 mon.c (mon.1) 16 : audit [INF] from='client.? 192.168.123.105:0/3456469166' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "377ff461-7194-48e3-8093-29ef296bd4de"}]: dispatch 2026-03-09T14:21:58.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:58 vm03 bash[17524]: audit 2026-03-09T14:21:57.781637+0000 mon.c (mon.1) 16 : audit [INF] from='client.? 192.168.123.105:0/3456469166' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "377ff461-7194-48e3-8093-29ef296bd4de"}]: dispatch 2026-03-09T14:21:58.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:58 vm03 bash[17524]: audit 2026-03-09T14:21:57.782917+0000 mon.a (mon.0) 598 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "377ff461-7194-48e3-8093-29ef296bd4de"}]: dispatch 2026-03-09T14:21:58.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:58 vm03 bash[17524]: audit 2026-03-09T14:21:57.782917+0000 mon.a (mon.0) 598 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "377ff461-7194-48e3-8093-29ef296bd4de"}]: dispatch 2026-03-09T14:21:58.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:58 vm03 bash[17524]: audit 2026-03-09T14:21:57.786216+0000 mon.a (mon.0) 599 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "377ff461-7194-48e3-8093-29ef296bd4de"}]': finished 2026-03-09T14:21:58.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:58 vm03 bash[17524]: audit 2026-03-09T14:21:57.786216+0000 mon.a (mon.0) 599 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "377ff461-7194-48e3-8093-29ef296bd4de"}]': finished 2026-03-09T14:21:58.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:58 vm03 bash[17524]: cluster 2026-03-09T14:21:57.788281+0000 mon.a (mon.0) 600 : cluster [DBG] osdmap e45: 8 total, 7 up, 8 in 2026-03-09T14:21:58.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:58 vm03 bash[17524]: cluster 2026-03-09T14:21:57.788281+0000 mon.a (mon.0) 600 : cluster [DBG] osdmap e45: 8 total, 7 up, 8 in 2026-03-09T14:21:58.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:58 vm03 bash[17524]: audit 2026-03-09T14:21:57.788371+0000 mon.a (mon.0) 601 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:21:58.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:58 vm03 bash[17524]: audit 2026-03-09T14:21:57.788371+0000 mon.a (mon.0) 601 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:21:59.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:59 vm04 bash[19581]: audit 2026-03-09T14:21:58.390298+0000 mon.c (mon.1) 17 : audit [DBG] from='client.? 192.168.123.105:0/3685685537' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:21:59.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:21:59 vm04 bash[19581]: audit 2026-03-09T14:21:58.390298+0000 mon.c (mon.1) 17 : audit [DBG] from='client.? 192.168.123.105:0/3685685537' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:21:59.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:59 vm05 bash[20070]: audit 2026-03-09T14:21:58.390298+0000 mon.c (mon.1) 17 : audit [DBG] from='client.? 192.168.123.105:0/3685685537' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:21:59.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:21:59 vm05 bash[20070]: audit 2026-03-09T14:21:58.390298+0000 mon.c (mon.1) 17 : audit [DBG] from='client.? 192.168.123.105:0/3685685537' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:21:59.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:59 vm03 bash[17524]: audit 2026-03-09T14:21:58.390298+0000 mon.c (mon.1) 17 : audit [DBG] from='client.? 192.168.123.105:0/3685685537' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:21:59.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:21:59 vm03 bash[17524]: audit 2026-03-09T14:21:58.390298+0000 mon.c (mon.1) 17 : audit [DBG] from='client.? 
2026-03-09T14:22:00.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:00 vm04 bash[19581]: cluster 2026-03-09T14:21:58.940483+0000 mgr.x (mgr.14150) 222 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-09T14:22:00.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:00 vm05 bash[20070]: cluster 2026-03-09T14:21:58.940483+0000 mgr.x (mgr.14150) 222 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-09T14:22:00.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:00 vm03 bash[17524]: cluster 2026-03-09T14:21:58.940483+0000 mgr.x (mgr.14150) 222 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-09T14:22:02.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:02 vm04 bash[19581]: cluster 2026-03-09T14:22:00.940723+0000 mgr.x (mgr.14150) 223 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-09T14:22:02.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:02 vm05 bash[20070]: cluster 2026-03-09T14:22:00.940723+0000 mgr.x (mgr.14150) 223 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-09T14:22:02.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:02 vm03 bash[17524]: cluster 2026-03-09T14:22:00.940723+0000 mgr.x (mgr.14150) 223 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-09T14:22:04.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:04 vm04 bash[19581]: cluster 2026-03-09T14:22:02.941032+0000 mgr.x (mgr.14150) 224 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-09T14:22:04.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:04 vm05 bash[20070]: cluster 2026-03-09T14:22:02.941032+0000 mgr.x (mgr.14150) 224 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-09T14:22:04.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:04 vm03 bash[17524]: cluster 2026-03-09T14:22:02.941032+0000 mgr.x (mgr.14150) 224 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-09T14:22:06.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:06 vm05 bash[20070]: cluster 2026-03-09T14:22:04.941318+0000 mgr.x (mgr.14150) 225 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-09T14:22:06.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:06 vm03 bash[17524]: cluster 2026-03-09T14:22:04.941318+0000 mgr.x (mgr.14150) 225 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-09T14:22:06.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:06 vm04 bash[19581]: cluster 2026-03-09T14:22:04.941318+0000 mgr.x (mgr.14150) 225 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-09T14:22:07.455 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:07 vm05 bash[20070]: audit 2026-03-09T14:22:06.871293+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch
2026-03-09T14:22:07.455 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:07 vm05 bash[20070]: audit 2026-03-09T14:22:06.872041+0000 mon.a (mon.0) 603 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:22:07.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:07 vm03 bash[17524]: audit 2026-03-09T14:22:06.871293+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch
2026-03-09T14:22:07.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:07 vm03 bash[17524]: audit 2026-03-09T14:22:06.872041+0000 mon.a (mon.0) 603 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:22:07.752 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:07 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:22:07.752 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:22:07 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:22:07.752 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:22:07 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:22:07.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:07 vm04 bash[19581]: audit 2026-03-09T14:22:06.871293+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch
2026-03-09T14:22:07.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:07 vm04 bash[19581]: audit 2026-03-09T14:22:06.872041+0000 mon.a (mon.0) 603 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
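Note: the repeated systemd complaint comes from the unit template cephadm generated for this cluster, ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service, which ships KillMode=none so that systemd never reaps the daemon's container processes itself. It is only a deprecation notice, logged once per unit as each daemon starts. The warning text already names the remedy; a hedged sketch of a local drop-in that applies it (whether KillMode=mixed is actually safe for the cephadm release in use is an assumption, and a cephadm upgrade would normally regenerate the unit instead):

    # drop-in override for the cephadm unit template (sketch)
    UNIT=ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service
    sudo mkdir -p /etc/systemd/system/$UNIT.d
    printf '[Service]\nKillMode=mixed\n' | sudo tee /etc/systemd/system/$UNIT.d/killmode.conf
    sudo systemctl daemon-reload

Because the override targets the @.service template, it applies to every daemon instance of this cluster fsid on the host.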
2026-03-09T14:22:08.511 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:08 vm05 bash[20070]: cephadm 2026-03-09T14:22:06.872554+0000 mgr.x (mgr.14150) 226 : cephadm [INF] Deploying daemon osd.7 on vm05
2026-03-09T14:22:08.512 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:08 vm05 bash[20070]: cluster 2026-03-09T14:22:06.941626+0000 mgr.x (mgr.14150) 227 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-09T14:22:08.512 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:08 vm05 bash[20070]: audit 2026-03-09T14:22:07.994857+0000 mon.a (mon.0) 604 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:22:08.512 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:08 vm05 bash[20070]: audit 2026-03-09T14:22:08.005642+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:08.512 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:08 vm05 bash[20070]: audit 2026-03-09T14:22:08.011388+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:08.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:08 vm03 bash[17524]: cephadm 2026-03-09T14:22:06.872554+0000 mgr.x (mgr.14150) 226 : cephadm [INF] Deploying daemon osd.7 on vm05
2026-03-09T14:22:08.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:08 vm03 bash[17524]: cluster 2026-03-09T14:22:06.941626+0000 mgr.x (mgr.14150) 227 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-09T14:22:08.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:08 vm03 bash[17524]: audit 2026-03-09T14:22:07.994857+0000 mon.a (mon.0) 604 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:22:08.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:08 vm03 bash[17524]: audit 2026-03-09T14:22:08.005642+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:08.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:08 vm03 bash[17524]: audit 2026-03-09T14:22:08.011388+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:08.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:08 vm04 bash[19581]: cephadm 2026-03-09T14:22:06.872554+0000 mgr.x (mgr.14150) 226 : cephadm [INF] Deploying daemon osd.7 on vm05
2026-03-09T14:22:08.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:08 vm04 bash[19581]: cluster 2026-03-09T14:22:06.941626+0000 mgr.x (mgr.14150) 227 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-09T14:22:08.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:08 vm04 bash[19581]: audit 2026-03-09T14:22:07.994857+0000 mon.a (mon.0) 604 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:22:08.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:08 vm04 bash[19581]: audit 2026-03-09T14:22:08.005642+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:08.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:08 vm04 bash[19581]: audit 2026-03-09T14:22:08.011388+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:10.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:10 vm03 bash[17524]: cluster 2026-03-09T14:22:08.941934+0000 mgr.x (mgr.14150) 228 : cluster [DBG] pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-09T14:22:10.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:10 vm04 bash[19581]: cluster 2026-03-09T14:22:08.941934+0000 mgr.x (mgr.14150) 228 : cluster [DBG] pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-09T14:22:10.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:10 vm05 bash[20070]: cluster 2026-03-09T14:22:08.941934+0000 mgr.x (mgr.14150) 228 : cluster [DBG] pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-09T14:22:12.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:12 vm03 bash[17524]: cluster 2026-03-09T14:22:10.942249+0000 mgr.x (mgr.14150) 229 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-09T14:22:12.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:12 vm03 bash[17524]: audit 2026-03-09T14:22:11.624166+0000 mon.c (mon.1) 18 : audit [INF] from='osd.7 [v2:192.168.123.105:6816/3385956239,v1:192.168.123.105:6817/3385956239]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-09T14:22:12.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:12 vm03 bash[17524]: audit 2026-03-09T14:22:11.624998+0000 mon.a (mon.0) 607 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-09T14:22:12.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:12 vm04 bash[19581]: cluster 2026-03-09T14:22:10.942249+0000 mgr.x (mgr.14150) 229 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-09T14:22:12.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:12 vm04 bash[19581]: audit 2026-03-09T14:22:11.624166+0000 mon.c (mon.1) 18 : audit [INF] from='osd.7 [v2:192.168.123.105:6816/3385956239,v1:192.168.123.105:6817/3385956239]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-09T14:22:12.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:12 vm04 bash[19581]: audit 2026-03-09T14:22:11.624998+0000 mon.a (mon.0) 607 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-09T14:22:12.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:12 vm05 bash[20070]: cluster 2026-03-09T14:22:10.942249+0000 mgr.x (mgr.14150) 229 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-09T14:22:12.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:12 vm05 bash[20070]: audit 2026-03-09T14:22:11.624166+0000 mon.c (mon.1) 18 : audit [INF] from='osd.7 [v2:192.168.123.105:6816/3385956239,v1:192.168.123.105:6817/3385956239]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-09T14:22:12.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:12 vm05 bash[20070]: audit 2026-03-09T14:22:11.624998+0000 mon.a (mon.0) 607 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T14:22:13.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:13 vm03 bash[17524]: audit 2026-03-09T14:22:12.285796+0000 mon.a (mon.0) 608 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T14:22:13.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:13 vm03 bash[17524]: audit 2026-03-09T14:22:12.285796+0000 mon.a (mon.0) 608 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T14:22:13.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:13 vm03 bash[17524]: cluster 2026-03-09T14:22:12.288214+0000 mon.a (mon.0) 609 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-09T14:22:13.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:13 vm03 bash[17524]: cluster 2026-03-09T14:22:12.288214+0000 mon.a (mon.0) 609 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-09T14:22:13.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:13 vm03 bash[17524]: audit 2026-03-09T14:22:12.288815+0000 mon.a (mon.0) 610 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:22:13.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:13 vm03 bash[17524]: audit 2026-03-09T14:22:12.288815+0000 mon.a (mon.0) 610 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:22:13.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:13 vm03 bash[17524]: audit 2026-03-09T14:22:12.289855+0000 mon.c (mon.1) 19 : audit [INF] from='osd.7 [v2:192.168.123.105:6816/3385956239,v1:192.168.123.105:6817/3385956239]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T14:22:13.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:13 vm03 bash[17524]: audit 2026-03-09T14:22:12.289855+0000 mon.c (mon.1) 19 : audit [INF] from='osd.7 [v2:192.168.123.105:6816/3385956239,v1:192.168.123.105:6817/3385956239]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T14:22:13.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:13 vm03 bash[17524]: audit 2026-03-09T14:22:12.303058+0000 mon.a (mon.0) 611 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T14:22:13.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:13 vm03 bash[17524]: audit 2026-03-09T14:22:12.303058+0000 mon.a (mon.0) 611 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T14:22:13.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:13 vm04 bash[19581]: audit 2026-03-09T14:22:12.285796+0000 mon.a (mon.0) 608 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T14:22:13.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:13 vm04 bash[19581]: audit 2026-03-09T14:22:12.285796+0000 mon.a (mon.0) 608 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": 
["7"]}]': finished 2026-03-09T14:22:13.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:13 vm04 bash[19581]: cluster 2026-03-09T14:22:12.288214+0000 mon.a (mon.0) 609 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-09T14:22:13.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:13 vm04 bash[19581]: cluster 2026-03-09T14:22:12.288214+0000 mon.a (mon.0) 609 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-09T14:22:13.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:13 vm04 bash[19581]: audit 2026-03-09T14:22:12.288815+0000 mon.a (mon.0) 610 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:22:13.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:13 vm04 bash[19581]: audit 2026-03-09T14:22:12.288815+0000 mon.a (mon.0) 610 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:22:13.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:13 vm04 bash[19581]: audit 2026-03-09T14:22:12.289855+0000 mon.c (mon.1) 19 : audit [INF] from='osd.7 [v2:192.168.123.105:6816/3385956239,v1:192.168.123.105:6817/3385956239]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T14:22:13.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:13 vm04 bash[19581]: audit 2026-03-09T14:22:12.289855+0000 mon.c (mon.1) 19 : audit [INF] from='osd.7 [v2:192.168.123.105:6816/3385956239,v1:192.168.123.105:6817/3385956239]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T14:22:13.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:13 vm04 bash[19581]: audit 2026-03-09T14:22:12.303058+0000 mon.a (mon.0) 611 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T14:22:13.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:13 vm04 bash[19581]: audit 2026-03-09T14:22:12.303058+0000 mon.a (mon.0) 611 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T14:22:13.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:13 vm05 bash[20070]: audit 2026-03-09T14:22:12.285796+0000 mon.a (mon.0) 608 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T14:22:13.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:13 vm05 bash[20070]: audit 2026-03-09T14:22:12.285796+0000 mon.a (mon.0) 608 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T14:22:13.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:13 vm05 bash[20070]: cluster 2026-03-09T14:22:12.288214+0000 mon.a (mon.0) 609 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-09T14:22:13.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:13 vm05 bash[20070]: cluster 2026-03-09T14:22:12.288214+0000 mon.a (mon.0) 609 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-09T14:22:13.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:13 vm05 bash[20070]: audit 2026-03-09T14:22:12.288815+0000 mon.a (mon.0) 610 : audit [DBG] from='mgr.14150 
192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:22:13.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:13 vm05 bash[20070]: audit 2026-03-09T14:22:12.288815+0000 mon.a (mon.0) 610 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:22:13.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:13 vm05 bash[20070]: audit 2026-03-09T14:22:12.289855+0000 mon.c (mon.1) 19 : audit [INF] from='osd.7 [v2:192.168.123.105:6816/3385956239,v1:192.168.123.105:6817/3385956239]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T14:22:13.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:13 vm05 bash[20070]: audit 2026-03-09T14:22:12.289855+0000 mon.c (mon.1) 19 : audit [INF] from='osd.7 [v2:192.168.123.105:6816/3385956239,v1:192.168.123.105:6817/3385956239]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T14:22:13.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:13 vm05 bash[20070]: audit 2026-03-09T14:22:12.303058+0000 mon.a (mon.0) 611 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T14:22:13.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:13 vm05 bash[20070]: audit 2026-03-09T14:22:12.303058+0000 mon.a (mon.0) 611 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T14:22:14.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:14 vm04 bash[19581]: cluster 2026-03-09T14:22:12.942517+0000 mgr.x (mgr.14150) 230 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:22:14.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:14 vm04 bash[19581]: cluster 2026-03-09T14:22:12.942517+0000 mgr.x (mgr.14150) 230 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:22:14.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:14 vm04 bash[19581]: audit 2026-03-09T14:22:13.294068+0000 mon.a (mon.0) 612 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-09T14:22:14.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:14 vm04 bash[19581]: audit 2026-03-09T14:22:13.294068+0000 mon.a (mon.0) 612 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-09T14:22:14.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:14 vm04 bash[19581]: cluster 2026-03-09T14:22:13.296434+0000 mon.a (mon.0) 613 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in 2026-03-09T14:22:14.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:14 vm04 bash[19581]: cluster 2026-03-09T14:22:13.296434+0000 mon.a (mon.0) 613 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in 2026-03-09T14:22:14.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:14 vm04 bash[19581]: audit 2026-03-09T14:22:13.297347+0000 mon.a (mon.0) 614 : audit [DBG] from='mgr.14150 
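Note: the audit entries above are the standard first-boot CRUSH registration performed by the OSD itself: tag the daemon with a device class, then weight it into the tree under its host bucket. CRUSH weight is by convention the device capacity in TiB, so the 0.0195 seen here corresponds to a roughly 20 GiB device (0.0195 x 1024 GiB is about 20 GiB). The same two steps as hand-run commands, a sketch only; a real OSD derives the weight from the device size at startup:

    # what osd.7 does against the monitors on first boot (sketch)
    ceph osd crush set-device-class hdd osd.7
    ceph osd crush create-or-move osd.7 0.0195 host=vm05 root=default

Each command lands in the audit log on two quorum members: once as the dispatch on the monitor that received it (mon.c, mon.1) and once as the forwarded copy on the leader (mon.a, mon.0), which is why both record it.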
2026-03-09T14:22:14.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:14 vm04 bash[19581]: audit 2026-03-09T14:22:13.311499+0000 mon.a (mon.0) 615 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T14:22:14.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:14 vm04 bash[19581]: audit 2026-03-09T14:22:14.217370+0000 mon.a (mon.0) 616 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:14.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:14 vm04 bash[19581]: audit 2026-03-09T14:22:14.222244+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:14.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:14 vm04 bash[19581]: audit 2026-03-09T14:22:14.223127+0000 mon.a (mon.0) 618 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:22:14.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:14 vm04 bash[19581]: audit 2026-03-09T14:22:14.223627+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:22:14.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:14 vm04 bash[19581]: audit 2026-03-09T14:22:14.227540+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:14.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:14 vm05 bash[20070]: cluster 2026-03-09T14:22:12.942517+0000 mgr.x (mgr.14150) 230 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-09T14:22:14.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:14 vm05 bash[20070]: audit 2026-03-09T14:22:13.294068+0000 mon.a (mon.0) 612 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished
2026-03-09T14:22:14.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:14 vm05 bash[20070]: cluster 2026-03-09T14:22:13.296434+0000 mon.a (mon.0) 613 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in
2026-03-09T14:22:14.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:14 vm05 bash[20070]: audit 2026-03-09T14:22:13.297347+0000 mon.a (mon.0) 614 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T14:22:14.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:14 vm05 bash[20070]: audit 2026-03-09T14:22:13.311499+0000 mon.a (mon.0) 615 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T14:22:14.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:14 vm05 bash[20070]: audit 2026-03-09T14:22:14.217370+0000 mon.a (mon.0) 616 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:14.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:14 vm05 bash[20070]: audit 2026-03-09T14:22:14.222244+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:14.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:14 vm05 bash[20070]: audit 2026-03-09T14:22:14.223127+0000 mon.a (mon.0) 618 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:22:14.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:14 vm05 bash[20070]: audit 2026-03-09T14:22:14.223627+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:22:14.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:14 vm05 bash[20070]: audit 2026-03-09T14:22:14.227540+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:14.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:14 vm03 bash[17524]: cluster 2026-03-09T14:22:12.942517+0000 mgr.x (mgr.14150) 230 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-09T14:22:14.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:14 vm03 bash[17524]: audit 2026-03-09T14:22:13.294068+0000 mon.a (mon.0) 612 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished
2026-03-09T14:22:14.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:14 vm03 bash[17524]: cluster 2026-03-09T14:22:13.296434+0000 mon.a (mon.0) 613 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in
2026-03-09T14:22:14.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:14 vm03 bash[17524]: audit 2026-03-09T14:22:13.297347+0000 mon.a (mon.0) 614 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T14:22:14.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:14 vm03 bash[17524]: audit 2026-03-09T14:22:13.311499+0000 mon.a (mon.0) 615 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T14:22:14.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:14 vm03 bash[17524]: audit 2026-03-09T14:22:14.217370+0000 mon.a (mon.0) 616 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:14.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:14 vm03 bash[17524]: audit 2026-03-09T14:22:14.222244+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:14.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:14 vm03 bash[17524]: audit 2026-03-09T14:22:14.223127+0000 mon.a (mon.0) 618 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:22:14.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:14 vm03 bash[17524]: audit 2026-03-09T14:22:14.223627+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:22:14.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:14 vm03 bash[17524]: audit 2026-03-09T14:22:14.227540+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:15.134 INFO:teuthology.orchestra.run.vm05.stdout:Created osd(s) 7 on host 'vm05'
2026-03-09T14:22:15.227 DEBUG:teuthology.orchestra.run.vm05:osd.7> sudo journalctl -f -n 0 -u ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.7.service
INFO:tasks.cephadm:Waiting for 8 OSDs to come up...
2026-03-09T14:22:15.228 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph osd stat -f json
2026-03-09T14:22:15.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:15 vm05 bash[20070]: cluster 2026-03-09T14:22:12.612620+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T14:22:15.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:15 vm05 bash[20070]: cluster 2026-03-09T14:22:12.612684+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T14:22:15.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:15 vm05 bash[20070]: audit 2026-03-09T14:22:14.301772+0000 mon.a (mon.0) 621 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T14:22:15.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:15 vm05 bash[20070]: cluster 2026-03-09T14:22:14.305782+0000 mon.a (mon.0) 622 : cluster [INF] osd.7 [v2:192.168.123.105:6816/3385956239,v1:192.168.123.105:6817/3385956239] boot
2026-03-09T14:22:15.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:15 vm05 bash[20070]: cluster 2026-03-09T14:22:14.305849+0000 mon.a (mon.0) 623 : cluster [DBG] osdmap e48: 8 total, 8 up, 8 in
2026-03-09T14:22:15.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:15 vm05 bash[20070]: audit 2026-03-09T14:22:14.306558+0000 mon.a (mon.0) 624 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T14:22:15.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:15 vm05 bash[20070]: audit 2026-03-09T14:22:15.121154+0000 mon.a (mon.0) 625 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:22:15.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:15 vm05 bash[20070]: audit 2026-03-09T14:22:15.125790+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:15.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:15 vm05 bash[20070]: audit 2026-03-09T14:22:15.130090+0000 mon.a (mon.0) 627 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:15.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:15 vm03 bash[17524]: cluster 2026-03-09T14:22:12.612620+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T14:22:15.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:15 vm03 bash[17524]: cluster 2026-03-09T14:22:12.612684+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T14:22:15.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:15 vm03 bash[17524]: audit 2026-03-09T14:22:14.301772+0000 mon.a (mon.0) 621 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T14:22:15.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:15 vm03 bash[17524]: cluster 2026-03-09T14:22:14.305782+0000 mon.a (mon.0) 622 : cluster [INF] osd.7 [v2:192.168.123.105:6816/3385956239,v1:192.168.123.105:6817/3385956239] boot
2026-03-09T14:22:15.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:15 vm03 bash[17524]: cluster 2026-03-09T14:22:14.305849+0000 mon.a (mon.0) 623 : cluster [DBG] osdmap e48: 8 total, 8 up, 8 in
2026-03-09T14:22:15.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:15 vm03 bash[17524]: audit 2026-03-09T14:22:14.306558+0000 mon.a (mon.0) 624 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T14:22:15.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:15 vm03 bash[17524]: audit 2026-03-09T14:22:15.121154+0000 mon.a (mon.0) 625 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:22:15.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:15 vm03 bash[17524]: audit 2026-03-09T14:22:15.125790+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:15.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:15 vm03 bash[17524]: audit 2026-03-09T14:22:15.130090+0000 mon.a (mon.0) 627 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:15.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:15 vm04 bash[19581]: cluster 2026-03-09T14:22:12.612620+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T14:22:15.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:15 vm04 bash[19581]: cluster 2026-03-09T14:22:12.612684+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T14:22:15.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:15 vm04 bash[19581]: audit 2026-03-09T14:22:14.301772+0000 mon.a (mon.0) 621 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T14:22:15.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:15 vm04 bash[19581]: cluster 2026-03-09T14:22:14.305782+0000 mon.a (mon.0) 622 : cluster [INF] osd.7 [v2:192.168.123.105:6816/3385956239,v1:192.168.123.105:6817/3385956239] boot
2026-03-09T14:22:15.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:15 vm04 bash[19581]: cluster 2026-03-09T14:22:14.305849+0000 mon.a (mon.0) 623 : cluster [DBG] osdmap e48: 8 total, 8 up, 8 in
2026-03-09T14:22:15.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:15 vm04 bash[19581]: audit 2026-03-09T14:22:14.306558+0000 mon.a (mon.0) 624 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-09T14:22:15.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:15 vm04 bash[19581]: audit 2026-03-09T14:22:15.121154+0000 mon.a (mon.0) 625 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:22:15.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:15 vm04 bash[19581]: audit 2026-03-09T14:22:15.125790+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:15.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:15 vm04 bash[19581]: audit 2026-03-09T14:22:15.130090+0000 mon.a (mon.0) 627 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:16.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:16 vm04 bash[19581]: cluster 2026-03-09T14:22:14.942759+0000 mgr.x (mgr.14150) 231 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-09T14:22:16.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:16 vm04 bash[19581]: cluster 2026-03-09T14:22:15.322847+0000 mon.a (mon.0) 628 : cluster [DBG] osdmap e49: 8 total, 8 up, 8 in
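Each mon's journalctl stream above replays the same cluster/audit records in a fixed shape: capture timestamp, journald prefix, then the record itself (channel, mon-side timestamp, reporting daemon, rank, sequence number, level, message). A throwaway parser along these lines — the pattern and group names are my own and not part of teuthology — is enough to follow one sequence number, such as 623, across all three mons:

import re

# Rough sketch only; field names are assumptions chosen for readability.
CLUSTER_LOG = re.compile(
    r"(?P<channel>cluster|audit|cephadm) "
    r"(?P<stamp>\S+) "
    r"(?P<source>\S+) \((?P<rank>[^)]+)\) (?P<seq>\d+) : "
    r"(?P<channel2>\w+) \[(?P<level>DBG|INF|WRN|ERR)\] (?P<message>.*)")

line = ("cluster 2026-03-09T14:22:14.305849+0000 mon.a (mon.0) 623 : "
        "cluster [DBG] osdmap e48: 8 total, 8 up, 8 in")
m = CLUSTER_LOG.match(line)
assert m is not None and m.group("seq") == "623"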
2026-03-09T14:22:16.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:16 vm05 bash[20070]: cluster 2026-03-09T14:22:14.942759+0000 mgr.x (mgr.14150) 231 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-09T14:22:16.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:16 vm05 bash[20070]: cluster 2026-03-09T14:22:15.322847+0000 mon.a (mon.0) 628 : cluster [DBG] osdmap e49: 8 total, 8 up, 8 in
2026-03-09T14:22:16.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:16 vm03 bash[17524]: cluster 2026-03-09T14:22:14.942759+0000 mgr.x (mgr.14150) 231 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-09T14:22:16.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:16 vm03 bash[17524]: cluster 2026-03-09T14:22:15.322847+0000 mon.a (mon.0) 628 : cluster [DBG] osdmap e49: 8 total, 8 up, 8 in
2026-03-09T14:22:18.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:18 vm04 bash[19581]: cluster 2026-03-09T14:22:16.943030+0000 mgr.x (mgr.14150) 232 : cluster [DBG] pgmap v198: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:22:18.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:18 vm05 bash[20070]: cluster 2026-03-09T14:22:16.943030+0000 mgr.x (mgr.14150) 232 : cluster [DBG] pgmap v198: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:22:18.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:18 vm03 bash[17524]: cluster 2026-03-09T14:22:16.943030+0000 mgr.x (mgr.14150) 232 : cluster [DBG] pgmap v198: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:22:19.859 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config
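The `ceph osd stat -f json` query issued above is how the task checks that all OSDs are up; a minimal sketch of that polling pattern, wrapped in `cephadm shell` exactly as the DEBUG line shows (this is an illustration, not teuthology's actual wait loop), could look like:

import json
import subprocess
import time

FSID = "3346de4a-1bc2-11f1-95ae-3796c8433614"
IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"

def osd_stat():
    # Run the query inside `cephadm shell`, matching the command traced above.
    out = subprocess.check_output([
        "sudo", "/home/ubuntu/cephtest/cephadm", "--image", IMAGE,
        "shell", "--fsid", FSID, "--",
        "ceph", "osd", "stat", "-f", "json"])
    return json.loads(out)

# Poll until all 8 OSDs report both up and in; the field names match the
# JSON answer that appears a few records below.
while True:
    stat = osd_stat()
    if stat["num_up_osds"] == 8 and stat["num_in_osds"] == 8:
        break
    time.sleep(5)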
2026-03-09T14:22:20.121 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:22:20.176 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":49,"num_osds":8,"num_up_osds":8,"osd_up_since":1773066134,"num_in_osds":8,"osd_in_since":1773066117,"num_remapped_pgs":0}
2026-03-09T14:22:20.176 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph osd dump --format=json
2026-03-09T14:22:20.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:20 vm03 bash[17524]: cluster 2026-03-09T14:22:18.943365+0000 mgr.x (mgr.14150) 233 : cluster [DBG] pgmap v199: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:22:20.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:20 vm03 bash[17524]: audit 2026-03-09T14:22:20.121119+0000 mon.a (mon.0) 629 : audit [DBG] from='client.? 192.168.123.103:0/2838598251' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-09T14:22:20.730 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:20 vm05 bash[20070]: cluster 2026-03-09T14:22:18.943365+0000 mgr.x (mgr.14150) 233 : cluster [DBG] pgmap v199: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:22:20.730 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:20 vm05 bash[20070]: audit 2026-03-09T14:22:20.121119+0000 mon.a (mon.0) 629 : audit [DBG] from='client.? 192.168.123.103:0/2838598251' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-09T14:22:20.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:20 vm04 bash[19581]: cluster 2026-03-09T14:22:18.943365+0000 mgr.x (mgr.14150) 233 : cluster [DBG] pgmap v199: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:22:20.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:20 vm04 bash[19581]: audit 2026-03-09T14:22:20.121119+0000 mon.a (mon.0) 629 : audit [DBG] from='client.? 192.168.123.103:0/2838598251' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-09T14:22:22.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:21 vm03 bash[17524]: cephadm 2026-03-09T14:22:20.782710+0000 mgr.x (mgr.14150) 234 : cephadm [INF] Detected new or changed devices on vm05
2026-03-09T14:22:22.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:21 vm03 bash[17524]: audit 2026-03-09T14:22:20.789612+0000 mon.a (mon.0) 630 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:22.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:21 vm03 bash[17524]: audit 2026-03-09T14:22:20.793436+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:22.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:21 vm03 bash[17524]: audit 2026-03-09T14:22:20.794097+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:22:22.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:21 vm03 bash[17524]: audit 2026-03-09T14:22:20.794524+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:22:22.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:21 vm03 bash[17524]: audit 2026-03-09T14:22:20.794931+0000 mon.a (mon.0) 634 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:22:22.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:21 vm03 bash[17524]: cephadm 2026-03-09T14:22:20.795240+0000 mgr.x (mgr.14150) 235 : cephadm [INF] Adjusting osd_memory_target on vm05 to 1517M
2026-03-09T14:22:22.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:21 vm03 bash[17524]: audit 2026-03-09T14:22:20.798098+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:22.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:21 vm03 bash[17524]: audit 2026-03-09T14:22:20.799401+0000 mon.a (mon.0) 636 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:22:22.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:21 vm03 bash[17524]: audit 2026-03-09T14:22:20.799810+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:22:22.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:21 vm03 bash[17524]: audit 2026-03-09T14:22:20.803254+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:22.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:21 vm03 bash[17524]: cluster 2026-03-09T14:22:20.943641+0000 mgr.x (mgr.14150) 236 : cluster [DBG] pgmap v200: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:22:22.258 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:21 vm04 bash[19581]: cephadm 2026-03-09T14:22:20.782710+0000 mgr.x (mgr.14150) 234 : cephadm [INF] Detected new or changed devices on vm05
2026-03-09T14:22:22.258 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:21 vm04 bash[19581]: audit 2026-03-09T14:22:20.789612+0000 mon.a (mon.0) 630 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:22.258 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:21 vm04 bash[19581]: audit 2026-03-09T14:22:20.793436+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:22.258 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:21 vm04 bash[19581]: audit 2026-03-09T14:22:20.794097+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:22:22.258 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:21 vm04 bash[19581]: audit 2026-03-09T14:22:20.794524+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:22:22.258 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:21 vm04 bash[19581]: audit 2026-03-09T14:22:20.794931+0000 mon.a (mon.0) 634 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:22:22.258 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:21 vm04 bash[19581]: cephadm 2026-03-09T14:22:20.795240+0000 mgr.x (mgr.14150) 235 : cephadm [INF] Adjusting osd_memory_target on vm05 to 1517M
2026-03-09T14:22:22.258 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:21 vm04 bash[19581]: audit 2026-03-09T14:22:20.798098+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:22.258 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:21 vm04 bash[19581]: audit 2026-03-09T14:22:20.799401+0000 mon.a (mon.0) 636 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:22:22.258 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:21 vm04 bash[19581]: audit 2026-03-09T14:22:20.799810+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:22:22.258 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:21 vm04 bash[19581]: audit 2026-03-09T14:22:20.803254+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:22.258 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:21 vm04 bash[19581]: cluster 2026-03-09T14:22:20.943641+0000 mgr.x (mgr.14150) 236 : cluster [DBG] pgmap v200: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:22:22.258 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:21 vm05 bash[20070]: cephadm 2026-03-09T14:22:20.782710+0000 mgr.x (mgr.14150) 234 : cephadm [INF] Detected new or changed devices on vm05
2026-03-09T14:22:22.259 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:21 vm05 bash[20070]: audit 2026-03-09T14:22:20.789612+0000 mon.a (mon.0) 630 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:22.259 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:21 vm05 bash[20070]: audit 2026-03-09T14:22:20.793436+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:22.259 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:21 vm05 bash[20070]: audit 2026-03-09T14:22:20.794097+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:22:22.259 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:21 vm05 bash[20070]: audit 2026-03-09T14:22:20.794524+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:22:22.259 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:21 vm05 bash[20070]: audit 2026-03-09T14:22:20.794931+0000 mon.a (mon.0) 634 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch
2026-03-09T14:22:22.259 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:21 vm05 bash[20070]: cephadm 2026-03-09T14:22:20.795240+0000 mgr.x (mgr.14150) 235 : cephadm [INF] Adjusting osd_memory_target on vm05 to 1517M
2026-03-09T14:22:22.259 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:21 vm05 bash[20070]: audit 2026-03-09T14:22:20.798098+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:22.259 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:21 vm05 bash[20070]: audit 2026-03-09T14:22:20.799401+0000 mon.a (mon.0) 636 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:22:22.259 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:21 vm05 bash[20070]: audit 2026-03-09T14:22:20.799810+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:22:22.259 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:21 vm05 bash[20070]: audit 2026-03-09T14:22:20.803254+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:22.259 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:21 vm05 bash[20070]: cluster 2026-03-09T14:22:20.943641+0000 mgr.x (mgr.14150) 236 : cluster [DBG] pgmap v200: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:22:23.871 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config
2026-03-09T14:22:24.173 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:22:24.173 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":49,"fsid":"3346de4a-1bc2-11f1-95ae-3796c8433614","created":"2026-03-09T14:16:38.209278+0000","modified":"2026-03-09T14:22:15.312087+0000","last_up_change":"2026-03-09T14:22:14.299826+0000","last_in_change":"2026-03-09T14:21:57.783293+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T14:19:36.973181+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":8
00000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"6f17c91b-de65-4e8c-9e74-a512b4d9d1c9","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":25,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6802","nonce":1075788976},{"type":"v1","addr":"192.168.123.103:6803","nonce":1075788976}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6804","nonce":1075788976},{"type":"v1","addr":"192.168.123.103:6805","nonce":1075788976}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6808","nonce":1075788976},{"type":"v1","addr":"192.168.123.103:6809","nonce":1075788976}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6806","nonce":1075788976},{"type":"v1","addr":"192.168.123.103:6807","nonce":1075788976}]},"public_addr":"192.168.123.103:6803/1075788976","cluster_addr":"192.168.123.103:6805/1075788976","heartbeat_back_addr":"192.168.123.103:6809/1075788976","heartbeat_front_addr":"192.168.123.103:6807/1075788976","state":["exists","up"]},{"osd":1,"uuid":"0ee8add4-d132-4666-b7ad-a8416c3c05bf","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6810","nonce":2015646488},{"type":"v1","addr":"192.168.123.103:6811","nonce":2015646488}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6812","nonce":2015646488},{"type":"v1","addr":"192.168.123.103:6813","nonce":2015646488}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6816","nonce":2015646488},{"type":"v1","addr":"192.168.123.103:6817","nonce":2015646488}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6814","nonce":2015646488},{"type":"v1","addr":"192.168.123.103:6815","nonce":2015646488}]},"public_addr":"192.168.123.103:6811/2015646488","cluster_addr":"192.168.123.103:6813/2015646488","heartbeat_back_addr":"192.168.123.103:6817/2015646488","heartbeat_front_addr":"192.168.123.103:6815/2015646488","state":["exists","up"]},{"osd":2,"uuid":"f76cddf6-4356-443b-8d69-5d0e6d8a3803","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":19,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6800","nonce":1899064825},{"type":"v1","addr":"192.168.123.104:6801","nonce":1899064825}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6802","nonce":1899064825},{"type":"v1","addr":"192.168.123.104:6803","nonce":1899064825}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6806","nonce":1899064825},{"type":"v1","addr":"192.168.123.104:6807","nonce":1899064825}]},"heartbeat_fr
ont_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6804","nonce":1899064825},{"type":"v1","addr":"192.168.123.104:6805","nonce":1899064825}]},"public_addr":"192.168.123.104:6801/1899064825","cluster_addr":"192.168.123.104:6803/1899064825","heartbeat_back_addr":"192.168.123.104:6807/1899064825","heartbeat_front_addr":"192.168.123.104:6805/1899064825","state":["exists","up"]},{"osd":3,"uuid":"d1d9774a-a921-4ff4-9d67-c8545864b268","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":25,"up_thru":38,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6808","nonce":1600567220},{"type":"v1","addr":"192.168.123.104:6809","nonce":1600567220}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6810","nonce":1600567220},{"type":"v1","addr":"192.168.123.104:6811","nonce":1600567220}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6814","nonce":1600567220},{"type":"v1","addr":"192.168.123.104:6815","nonce":1600567220}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6812","nonce":1600567220},{"type":"v1","addr":"192.168.123.104:6813","nonce":1600567220}]},"public_addr":"192.168.123.104:6809/1600567220","cluster_addr":"192.168.123.104:6811/1600567220","heartbeat_back_addr":"192.168.123.104:6815/1600567220","heartbeat_front_addr":"192.168.123.104:6813/1600567220","state":["exists","up"]},{"osd":4,"uuid":"97a3c763-32a2-413f-8d3f-0e7163f512ed","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":31,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6816","nonce":3814952582},{"type":"v1","addr":"192.168.123.104:6817","nonce":3814952582}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6818","nonce":3814952582},{"type":"v1","addr":"192.168.123.104:6819","nonce":3814952582}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6822","nonce":3814952582},{"type":"v1","addr":"192.168.123.104:6823","nonce":3814952582}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6820","nonce":3814952582},{"type":"v1","addr":"192.168.123.104:6821","nonce":3814952582}]},"public_addr":"192.168.123.104:6817/3814952582","cluster_addr":"192.168.123.104:6819/3814952582","heartbeat_back_addr":"192.168.123.104:6823/3814952582","heartbeat_front_addr":"192.168.123.104:6821/3814952582","state":["exists","up"]},{"osd":5,"uuid":"628905a2-37b8-4495-89ad-022957204832","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":37,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6800","nonce":67591369},{"type":"v1","addr":"192.168.123.105:6801","nonce":67591369}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6802","nonce":67591369},{"type":"v1","addr":"192.168.123.105:6803","nonce":67591369}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6806","nonce":67591369},{"type":"v1","addr":"192.168.123.105:6807","nonce":67591369}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6804","nonce":67591369},{"type":"v1","addr":"192.168.123.105:6805","nonce":67591369}]},"public_addr":"192.168.123.105:6801/67591369","cluster_addr":"192.168.123.105:6803/67591369","heartbeat_back_addr":"192.168.123.105:6807/67591369","heartbeat_front_addr":"192.168.123.105:6805/67591369","state":["exists","up"]},{"o
sd":6,"uuid":"bf677cce-a472-46ab-9a91-492f3b2e689b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":43,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6808","nonce":1573740685},{"type":"v1","addr":"192.168.123.105:6809","nonce":1573740685}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6810","nonce":1573740685},{"type":"v1","addr":"192.168.123.105:6811","nonce":1573740685}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6814","nonce":1573740685},{"type":"v1","addr":"192.168.123.105:6815","nonce":1573740685}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6812","nonce":1573740685},{"type":"v1","addr":"192.168.123.105:6813","nonce":1573740685}]},"public_addr":"192.168.123.105:6809/1573740685","cluster_addr":"192.168.123.105:6811/1573740685","heartbeat_back_addr":"192.168.123.105:6815/1573740685","heartbeat_front_addr":"192.168.123.105:6813/1573740685","state":["exists","up"]},{"osd":7,"uuid":"377ff461-7194-48e3-8093-29ef296bd4de","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":48,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6816","nonce":3385956239},{"type":"v1","addr":"192.168.123.105:6817","nonce":3385956239}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6818","nonce":3385956239},{"type":"v1","addr":"192.168.123.105:6819","nonce":3385956239}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6822","nonce":3385956239},{"type":"v1","addr":"192.168.123.105:6823","nonce":3385956239}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6820","nonce":3385956239},{"type":"v1","addr":"192.168.123.105:6821","nonce":3385956239}]},"public_addr":"192.168.123.105:6817/3385956239","cluster_addr":"192.168.123.105:6819/3385956239","heartbeat_back_addr":"192.168.123.105:6823/3385956239","heartbeat_front_addr":"192.168.123.105:6821/3385956239","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:18:31.521402+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:19:04.025881+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:19:34.043331+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:20:06.342628+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:20:38.397649+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:21:08.079874+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:21:40.198080+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probab
ility":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:22:12.612686+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.103:0/3863832399":"2026-03-10T14:16:58.892776+0000","192.168.123.103:0/2873272369":"2026-03-10T14:16:58.892776+0000","192.168.123.103:0/134279448":"2026-03-10T14:16:58.892776+0000","192.168.123.103:0/2194687713":"2026-03-10T14:16:48.640079+0000","192.168.123.103:6801/623478427":"2026-03-10T14:16:48.640079+0000","192.168.123.103:6801/2250683817":"2026-03-10T14:16:58.892776+0000","192.168.123.103:6800/623478427":"2026-03-10T14:16:48.640079+0000","192.168.123.103:0/205210706":"2026-03-10T14:16:48.640079+0000","192.168.123.103:6800/2250683817":"2026-03-10T14:16:58.892776+0000","192.168.123.103:0/1825529667":"2026-03-10T14:16:48.640079+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T14:22:24.183 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:23 vm03 bash[17524]: cluster 2026-03-09T14:22:22.943908+0000 mgr.x (mgr.14150) 237 : cluster [DBG] pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:24.183 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:23 vm03 bash[17524]: cluster 2026-03-09T14:22:22.943908+0000 mgr.x (mgr.14150) 237 : cluster [DBG] pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:24.227 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-09T14:19:36.973181+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'is_stretch_pool': False, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '21', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': 
{'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_type': 'Fair distribution', 'score_acting': 7.889999866485596, 'score_stable': 7.889999866485596, 'optimal_score': 0.3799999952316284, 'raw_score_acting': 3, 'raw_score_stable': 3, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}] 2026-03-09T14:22:24.227 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph osd pool get .mgr pg_num 2026-03-09T14:22:24.257 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:23 vm04 bash[19581]: cluster 2026-03-09T14:22:22.943908+0000 mgr.x (mgr.14150) 237 : cluster [DBG] pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:24.257 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:23 vm04 bash[19581]: cluster 2026-03-09T14:22:22.943908+0000 mgr.x (mgr.14150) 237 : cluster [DBG] pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:24.258 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:24 vm05 bash[20070]: cluster 2026-03-09T14:22:22.943908+0000 mgr.x (mgr.14150) 237 : cluster [DBG] pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:24.258 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:24 vm05 bash[20070]: cluster 2026-03-09T14:22:22.943908+0000 mgr.x (mgr.14150) 237 : cluster [DBG] pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:25.258 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:25 vm04 bash[19581]: audit 2026-03-09T14:22:24.172612+0000 mon.b (mon.2) 17 : audit [DBG] from='client.? 192.168.123.103:0/3417334409' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:22:25.258 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:25 vm04 bash[19581]: audit 2026-03-09T14:22:24.172612+0000 mon.b (mon.2) 17 : audit [DBG] from='client.? 192.168.123.103:0/3417334409' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:22:25.258 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:25 vm05 bash[20070]: audit 2026-03-09T14:22:24.172612+0000 mon.b (mon.2) 17 : audit [DBG] from='client.? 192.168.123.103:0/3417334409' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:22:25.258 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:25 vm05 bash[20070]: audit 2026-03-09T14:22:24.172612+0000 mon.b (mon.2) 17 : audit [DBG] from='client.? 192.168.123.103:0/3417334409' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:22:25.303 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:25 vm03 bash[17524]: audit 2026-03-09T14:22:24.172612+0000 mon.b (mon.2) 17 : audit [DBG] from='client.? 192.168.123.103:0/3417334409' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:22:25.303 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:25 vm03 bash[17524]: audit 2026-03-09T14:22:24.172612+0000 mon.b (mon.2) 17 : audit [DBG] from='client.? 
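The enormous JSON above is the answer to `ceph osd dump --format=json`; the ceph_manager line is teuthology echoing just its 'pools' array, and the run then double-checks the pg count with `ceph osd pool get .mgr pg_num`. A minimal sketch of that parse-and-check step (reusing the same cephadm wrapper pattern; not teuthology's actual helper) could look like:

import json
import subprocess

def list_pools(fsid, image):
    # Same pattern as the DEBUG lines above: run the query in cephadm shell.
    out = subprocess.check_output([
        "sudo", "/home/ubuntu/cephtest/cephadm", "--image", image,
        "shell", "--fsid", fsid, "--",
        "ceph", "osd", "dump", "--format=json"])
    return json.loads(out)["pools"]

pools = list_pools(
    "3346de4a-1bc2-11f1-95ae-3796c8433614",
    "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df")
mgr_pool = next(p for p in pools if p["pool_name"] == ".mgr")
assert mgr_pool["pg_num"] == 1  # matches the `pg_num: 1` answer below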
2026-03-09T14:22:26.303 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:26 vm03 bash[17524]: cluster 2026-03-09T14:22:24.944178+0000 mgr.x (mgr.14150) 238 : cluster [DBG] pgmap v202: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:22:26.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:26 vm04 bash[19581]: cluster 2026-03-09T14:22:24.944178+0000 mgr.x (mgr.14150) 238 : cluster [DBG] pgmap v202: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:22:26.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:26 vm05 bash[20070]: cluster 2026-03-09T14:22:24.944178+0000 mgr.x (mgr.14150) 238 : cluster [DBG] pgmap v202: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:22:27.886 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config
2026-03-09T14:22:28.141 INFO:teuthology.orchestra.run.vm03.stdout:pg_num: 1
2026-03-09T14:22:28.151 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:28 vm03 bash[17524]: cluster 2026-03-09T14:22:26.944447+0000 mgr.x (mgr.14150) 239 : cluster [DBG] pgmap v203: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:22:28.196 INFO:tasks.cephadm:Adding ceph.iscsi.iscsi.a on vm03
2026-03-09T14:22:28.196 INFO:tasks.cephadm:Adding ceph.iscsi.iscsi.b on vm05
2026-03-09T14:22:28.196 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph osd pool create datapool 3 3 replicated
2026-03-09T14:22:28.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:28 vm04 bash[19581]: cluster 2026-03-09T14:22:26.944447+0000 mgr.x (mgr.14150) 239 : cluster [DBG] pgmap v203: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:22:28.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:28 vm05 bash[20070]: cluster 2026-03-09T14:22:26.944447+0000 mgr.x (mgr.14150) 239 : cluster [DBG] pgmap v203: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:22:29.303 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:29 vm03 bash[17524]: audit 2026-03-09T14:22:28.141854+0000 mon.a (mon.0) 639 : audit [DBG] from='client.? 192.168.123.103:0/128686912' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch
2026-03-09T14:22:29.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:29 vm04 bash[19581]: audit 2026-03-09T14:22:28.141854+0000 mon.a (mon.0) 639 : audit [DBG] from='client.? 192.168.123.103:0/128686912' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch
2026-03-09T14:22:29.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:29 vm05 bash[20070]: audit 2026-03-09T14:22:28.141854+0000 mon.a (mon.0) 639 : audit [DBG] from='client.? 192.168.123.103:0/128686912' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch
2026-03-09T14:22:30.303 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:30 vm03 bash[17524]: cluster 2026-03-09T14:22:28.944735+0000 mgr.x (mgr.14150) 240 : cluster [DBG] pgmap v204: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:22:30.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:30 vm04 bash[19581]: cluster 2026-03-09T14:22:28.944735+0000 mgr.x (mgr.14150) 240 : cluster [DBG] pgmap v204: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:22:30.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:30 vm05 bash[20070]: cluster 2026-03-09T14:22:28.944735+0000 mgr.x (mgr.14150) 240 : cluster [DBG] pgmap v204: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:22:32.303 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:32 vm03 bash[17524]: cluster 2026-03-09T14:22:30.945079+0000 mgr.x (mgr.14150) 241 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:22:32.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:32 vm04 bash[19581]: cluster 2026-03-09T14:22:30.945079+0000 mgr.x (mgr.14150) 241 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:22:32.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:32 vm05 bash[20070]: cluster 2026-03-09T14:22:30.945079+0000 mgr.x (mgr.14150) 241 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:22:32.814 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.c/config
2026-03-09T14:22:34.042 INFO:teuthology.orchestra.run.vm05.stderr:pool 'datapool' created
2026-03-09T14:22:34.105 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- rbd pool init datapool
DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- rbd pool init datapool 2026-03-09T14:22:34.303 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:34 vm03 bash[17524]: cluster 2026-03-09T14:22:32.945397+0000 mgr.x (mgr.14150) 242 : cluster [DBG] pgmap v206: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:34.303 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:34 vm03 bash[17524]: cluster 2026-03-09T14:22:32.945397+0000 mgr.x (mgr.14150) 242 : cluster [DBG] pgmap v206: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:34.303 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:34 vm03 bash[17524]: audit 2026-03-09T14:22:33.073311+0000 mon.c (mon.1) 20 : audit [INF] from='client.? 192.168.123.105:0/109541897' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T14:22:34.303 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:34 vm03 bash[17524]: audit 2026-03-09T14:22:33.073311+0000 mon.c (mon.1) 20 : audit [INF] from='client.? 192.168.123.105:0/109541897' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T14:22:34.303 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:34 vm03 bash[17524]: audit 2026-03-09T14:22:33.074222+0000 mon.a (mon.0) 640 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T14:22:34.303 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:34 vm03 bash[17524]: audit 2026-03-09T14:22:33.074222+0000 mon.a (mon.0) 640 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T14:22:34.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:34 vm04 bash[19581]: cluster 2026-03-09T14:22:32.945397+0000 mgr.x (mgr.14150) 242 : cluster [DBG] pgmap v206: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:34.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:34 vm04 bash[19581]: cluster 2026-03-09T14:22:32.945397+0000 mgr.x (mgr.14150) 242 : cluster [DBG] pgmap v206: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:34.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:34 vm04 bash[19581]: audit 2026-03-09T14:22:33.073311+0000 mon.c (mon.1) 20 : audit [INF] from='client.? 192.168.123.105:0/109541897' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T14:22:34.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:34 vm04 bash[19581]: audit 2026-03-09T14:22:33.073311+0000 mon.c (mon.1) 20 : audit [INF] from='client.? 
192.168.123.105:0/109541897' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T14:22:34.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:34 vm04 bash[19581]: audit 2026-03-09T14:22:33.074222+0000 mon.a (mon.0) 640 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T14:22:34.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:34 vm04 bash[19581]: audit 2026-03-09T14:22:33.074222+0000 mon.a (mon.0) 640 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T14:22:34.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:34 vm05 bash[20070]: cluster 2026-03-09T14:22:32.945397+0000 mgr.x (mgr.14150) 242 : cluster [DBG] pgmap v206: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:34.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:34 vm05 bash[20070]: cluster 2026-03-09T14:22:32.945397+0000 mgr.x (mgr.14150) 242 : cluster [DBG] pgmap v206: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:34.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:34 vm05 bash[20070]: audit 2026-03-09T14:22:33.073311+0000 mon.c (mon.1) 20 : audit [INF] from='client.? 192.168.123.105:0/109541897' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T14:22:34.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:34 vm05 bash[20070]: audit 2026-03-09T14:22:33.073311+0000 mon.c (mon.1) 20 : audit [INF] from='client.? 192.168.123.105:0/109541897' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T14:22:34.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:34 vm05 bash[20070]: audit 2026-03-09T14:22:33.074222+0000 mon.a (mon.0) 640 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T14:22:34.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:34 vm05 bash[20070]: audit 2026-03-09T14:22:33.074222+0000 mon.a (mon.0) 640 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T14:22:35.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:35 vm04 bash[19581]: audit 2026-03-09T14:22:34.038313+0000 mon.a (mon.0) 641 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T14:22:35.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:35 vm04 bash[19581]: audit 2026-03-09T14:22:34.038313+0000 mon.a (mon.0) 641 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T14:22:35.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:35 vm04 bash[19581]: cluster 2026-03-09T14:22:34.040413+0000 mon.a (mon.0) 642 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in 2026-03-09T14:22:35.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:35 vm04 bash[19581]: cluster 2026-03-09T14:22:34.040413+0000 mon.a (mon.0) 642 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in 2026-03-09T14:22:35.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:35 vm05 bash[20070]: audit 2026-03-09T14:22:34.038313+0000 mon.a (mon.0) 641 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T14:22:35.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:35 vm05 bash[20070]: audit 2026-03-09T14:22:34.038313+0000 mon.a (mon.0) 641 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T14:22:35.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:35 vm05 bash[20070]: cluster 2026-03-09T14:22:34.040413+0000 mon.a (mon.0) 642 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in 2026-03-09T14:22:35.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:35 vm05 bash[20070]: cluster 2026-03-09T14:22:34.040413+0000 mon.a (mon.0) 642 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in 2026-03-09T14:22:35.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:35 vm03 bash[17524]: audit 2026-03-09T14:22:34.038313+0000 mon.a (mon.0) 641 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T14:22:35.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:35 vm03 bash[17524]: audit 2026-03-09T14:22:34.038313+0000 mon.a (mon.0) 641 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T14:22:35.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:35 vm03 bash[17524]: cluster 2026-03-09T14:22:34.040413+0000 mon.a (mon.0) 642 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in 2026-03-09T14:22:35.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:35 vm03 bash[17524]: cluster 2026-03-09T14:22:34.040413+0000 mon.a (mon.0) 642 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in 2026-03-09T14:22:36.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:36 vm04 bash[19581]: cluster 2026-03-09T14:22:34.945652+0000 mgr.x (mgr.14150) 243 : cluster [DBG] pgmap v208: 4 pgs: 2 creating+peering, 1 active+clean, 1 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:36.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:36 vm04 bash[19581]: cluster 2026-03-09T14:22:34.945652+0000 mgr.x (mgr.14150) 243 : cluster [DBG] pgmap v208: 4 pgs: 2 creating+peering, 1 active+clean, 1 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:36.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:36 vm04 bash[19581]: cluster 2026-03-09T14:22:35.057641+0000 mon.a (mon.0) 643 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-09T14:22:36.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:36 vm04 bash[19581]: cluster 2026-03-09T14:22:35.057641+0000 mon.a (mon.0) 643 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-09T14:22:36.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:36 vm05 bash[20070]: cluster 2026-03-09T14:22:34.945652+0000 mgr.x (mgr.14150) 243 : cluster [DBG] pgmap v208: 4 pgs: 2 creating+peering, 1 active+clean, 1 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:36.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:36 vm05 bash[20070]: cluster 2026-03-09T14:22:34.945652+0000 mgr.x (mgr.14150) 243 : cluster [DBG] pgmap v208: 4 pgs: 2 creating+peering, 1 active+clean, 1 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:36.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:36 vm05 bash[20070]: cluster 2026-03-09T14:22:35.057641+0000 mon.a (mon.0) 643 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-09T14:22:36.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:36 vm05 bash[20070]: cluster 2026-03-09T14:22:35.057641+0000 mon.a (mon.0) 643 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-09T14:22:36.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:36 vm03 bash[17524]: cluster 2026-03-09T14:22:34.945652+0000 mgr.x (mgr.14150) 243 : cluster [DBG] pgmap v208: 4 pgs: 2 creating+peering, 1 active+clean, 1 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:36.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:36 vm03 bash[17524]: cluster 2026-03-09T14:22:34.945652+0000 mgr.x (mgr.14150) 243 : cluster [DBG] pgmap v208: 4 pgs: 2 creating+peering, 1 active+clean, 1 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:36.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:36 vm03 bash[17524]: cluster 2026-03-09T14:22:35.057641+0000 mon.a (mon.0) 643 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-09T14:22:36.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:36 vm03 bash[17524]: cluster 2026-03-09T14:22:35.057641+0000 mon.a (mon.0) 643 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-09T14:22:37.508 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:37 vm04 bash[19581]: cluster 2026-03-09T14:22:36.058196+0000 mon.a (mon.0) 644 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-09T14:22:37.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:37 vm04 bash[19581]: cluster 2026-03-09T14:22:36.058196+0000 mon.a (mon.0) 644 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-09T14:22:37.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:37 vm05 bash[20070]: cluster 2026-03-09T14:22:36.058196+0000 mon.a (mon.0) 644 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-09T14:22:37.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:37 vm05 bash[20070]: cluster 2026-03-09T14:22:36.058196+0000 mon.a (mon.0) 644 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-09T14:22:37.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:37 vm03 bash[17524]: cluster 2026-03-09T14:22:36.058196+0000 mon.a (mon.0) 644 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-09T14:22:37.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:37 vm03 bash[17524]: cluster 2026-03-09T14:22:36.058196+0000 mon.a (mon.0) 644 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-09T14:22:38.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:38 vm04 bash[19581]: cluster 2026-03-09T14:22:36.945943+0000 mgr.x (mgr.14150) 244 : cluster [DBG] pgmap v211: 4 pgs: 2 creating+peering, 1 active+clean, 1 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:38.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:38 vm04 bash[19581]: cluster 2026-03-09T14:22:36.945943+0000 mgr.x (mgr.14150) 244 : cluster [DBG] pgmap v211: 4 pgs: 2 creating+peering, 1 active+clean, 1 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:38.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:38 vm05 bash[20070]: cluster 2026-03-09T14:22:36.945943+0000 mgr.x (mgr.14150) 244 : cluster [DBG] pgmap v211: 4 pgs: 2 creating+peering, 1 active+clean, 1 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:38.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:38 vm05 bash[20070]: cluster 2026-03-09T14:22:36.945943+0000 mgr.x (mgr.14150) 244 : cluster [DBG] pgmap v211: 4 pgs: 2 creating+peering, 1 active+clean, 1 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:38.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:38 vm03 bash[17524]: cluster 2026-03-09T14:22:36.945943+0000 mgr.x (mgr.14150) 244 : cluster [DBG] pgmap v211: 4 pgs: 2 creating+peering, 1 active+clean, 1 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:38.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:38 vm03 bash[17524]: cluster 2026-03-09T14:22:36.945943+0000 mgr.x (mgr.14150) 244 : cluster [DBG] pgmap v211: 4 pgs: 2 creating+peering, 1 active+clean, 1 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:38.728 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.c/config 2026-03-09T14:22:39.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:39 vm04 bash[19581]: audit 2026-03-09T14:22:38.849098+0000 mon.a (mon.0) 645 : audit [INF] from='client.? 
192.168.123.105:0/1040241800' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T14:22:39.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:39 vm04 bash[19581]: audit 2026-03-09T14:22:38.849098+0000 mon.a (mon.0) 645 : audit [INF] from='client.? 192.168.123.105:0/1040241800' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T14:22:39.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:39 vm05 bash[20070]: audit 2026-03-09T14:22:38.849098+0000 mon.a (mon.0) 645 : audit [INF] from='client.? 192.168.123.105:0/1040241800' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T14:22:39.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:39 vm05 bash[20070]: audit 2026-03-09T14:22:38.849098+0000 mon.a (mon.0) 645 : audit [INF] from='client.? 192.168.123.105:0/1040241800' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T14:22:39.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:39 vm03 bash[17524]: audit 2026-03-09T14:22:38.849098+0000 mon.a (mon.0) 645 : audit [INF] from='client.? 192.168.123.105:0/1040241800' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T14:22:39.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:39 vm03 bash[17524]: audit 2026-03-09T14:22:38.849098+0000 mon.a (mon.0) 645 : audit [INF] from='client.? 192.168.123.105:0/1040241800' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T14:22:40.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:40 vm04 bash[19581]: cluster 2026-03-09T14:22:38.946195+0000 mgr.x (mgr.14150) 245 : cluster [DBG] pgmap v212: 4 pgs: 2 creating+peering, 2 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:40.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:40 vm04 bash[19581]: cluster 2026-03-09T14:22:38.946195+0000 mgr.x (mgr.14150) 245 : cluster [DBG] pgmap v212: 4 pgs: 2 creating+peering, 2 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:40.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:40 vm04 bash[19581]: audit 2026-03-09T14:22:39.077780+0000 mon.a (mon.0) 646 : audit [INF] from='client.? 192.168.123.105:0/1040241800' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-09T14:22:40.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:40 vm04 bash[19581]: audit 2026-03-09T14:22:39.077780+0000 mon.a (mon.0) 646 : audit [INF] from='client.? 
192.168.123.105:0/1040241800' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-09T14:22:40.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:40 vm04 bash[19581]: cluster 2026-03-09T14:22:39.079600+0000 mon.a (mon.0) 647 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-09T14:22:40.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:40 vm04 bash[19581]: cluster 2026-03-09T14:22:39.079600+0000 mon.a (mon.0) 647 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-09T14:22:40.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:40 vm05 bash[20070]: cluster 2026-03-09T14:22:38.946195+0000 mgr.x (mgr.14150) 245 : cluster [DBG] pgmap v212: 4 pgs: 2 creating+peering, 2 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:40.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:40 vm05 bash[20070]: cluster 2026-03-09T14:22:38.946195+0000 mgr.x (mgr.14150) 245 : cluster [DBG] pgmap v212: 4 pgs: 2 creating+peering, 2 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:40.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:40 vm05 bash[20070]: audit 2026-03-09T14:22:39.077780+0000 mon.a (mon.0) 646 : audit [INF] from='client.? 192.168.123.105:0/1040241800' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-09T14:22:40.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:40 vm05 bash[20070]: audit 2026-03-09T14:22:39.077780+0000 mon.a (mon.0) 646 : audit [INF] from='client.? 192.168.123.105:0/1040241800' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-09T14:22:40.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:40 vm05 bash[20070]: cluster 2026-03-09T14:22:39.079600+0000 mon.a (mon.0) 647 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-09T14:22:40.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:40 vm05 bash[20070]: cluster 2026-03-09T14:22:39.079600+0000 mon.a (mon.0) 647 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-09T14:22:40.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:40 vm03 bash[17524]: cluster 2026-03-09T14:22:38.946195+0000 mgr.x (mgr.14150) 245 : cluster [DBG] pgmap v212: 4 pgs: 2 creating+peering, 2 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:40.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:40 vm03 bash[17524]: cluster 2026-03-09T14:22:38.946195+0000 mgr.x (mgr.14150) 245 : cluster [DBG] pgmap v212: 4 pgs: 2 creating+peering, 2 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:40.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:40 vm03 bash[17524]: audit 2026-03-09T14:22:39.077780+0000 mon.a (mon.0) 646 : audit [INF] from='client.? 192.168.123.105:0/1040241800' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-09T14:22:40.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:40 vm03 bash[17524]: audit 2026-03-09T14:22:39.077780+0000 mon.a (mon.0) 646 : audit [INF] from='client.? 
192.168.123.105:0/1040241800' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-09T14:22:40.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:40 vm03 bash[17524]: cluster 2026-03-09T14:22:39.079600+0000 mon.a (mon.0) 647 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-09T14:22:40.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:40 vm03 bash[17524]: cluster 2026-03-09T14:22:39.079600+0000 mon.a (mon.0) 647 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-09T14:22:41.161 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph orch apply iscsi datapool admin admin --trusted_ip_list 192.168.123.103,192.168.123.105 --placement '2;vm03=iscsi.a;vm05=iscsi.b' 2026-03-09T14:22:41.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:41 vm04 bash[19581]: cluster 2026-03-09T14:22:40.084230+0000 mon.a (mon.0) 648 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-09T14:22:41.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:41 vm04 bash[19581]: cluster 2026-03-09T14:22:40.084230+0000 mon.a (mon.0) 648 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-09T14:22:41.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:41 vm05 bash[20070]: cluster 2026-03-09T14:22:40.084230+0000 mon.a (mon.0) 648 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-09T14:22:41.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:41 vm05 bash[20070]: cluster 2026-03-09T14:22:40.084230+0000 mon.a (mon.0) 648 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-09T14:22:41.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:41 vm03 bash[17524]: cluster 2026-03-09T14:22:40.084230+0000 mon.a (mon.0) 648 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-09T14:22:41.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:41 vm03 bash[17524]: cluster 2026-03-09T14:22:40.084230+0000 mon.a (mon.0) 648 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-09T14:22:42.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:42 vm04 bash[19581]: cluster 2026-03-09T14:22:40.946469+0000 mgr.x (mgr.14150) 246 : cluster [DBG] pgmap v215: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:42.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:42 vm04 bash[19581]: cluster 2026-03-09T14:22:40.946469+0000 mgr.x (mgr.14150) 246 : cluster [DBG] pgmap v215: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:42.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:42 vm04 bash[19581]: cluster 2026-03-09T14:22:41.096564+0000 mon.a (mon.0) 649 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-09T14:22:42.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:42 vm04 bash[19581]: cluster 2026-03-09T14:22:41.096564+0000 mon.a (mon.0) 649 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-09T14:22:42.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:42 vm05 bash[20070]: cluster 2026-03-09T14:22:40.946469+0000 mgr.x (mgr.14150) 246 : cluster [DBG] pgmap v215: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:42.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:42 vm05 bash[20070]: cluster 2026-03-09T14:22:40.946469+0000 mgr.x (mgr.14150) 246 : cluster 
[DBG] pgmap v215: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:42.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:42 vm05 bash[20070]: cluster 2026-03-09T14:22:41.096564+0000 mon.a (mon.0) 649 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-09T14:22:42.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:42 vm05 bash[20070]: cluster 2026-03-09T14:22:41.096564+0000 mon.a (mon.0) 649 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-09T14:22:42.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:42 vm03 bash[17524]: cluster 2026-03-09T14:22:40.946469+0000 mgr.x (mgr.14150) 246 : cluster [DBG] pgmap v215: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:42.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:42 vm03 bash[17524]: cluster 2026-03-09T14:22:40.946469+0000 mgr.x (mgr.14150) 246 : cluster [DBG] pgmap v215: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:42.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:42 vm03 bash[17524]: cluster 2026-03-09T14:22:41.096564+0000 mon.a (mon.0) 649 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-09T14:22:42.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:42 vm03 bash[17524]: cluster 2026-03-09T14:22:41.096564+0000 mon.a (mon.0) 649 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-09T14:22:44.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:44 vm04 bash[19581]: cluster 2026-03-09T14:22:42.946704+0000 mgr.x (mgr.14150) 247 : cluster [DBG] pgmap v217: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:44.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:44 vm04 bash[19581]: cluster 2026-03-09T14:22:42.946704+0000 mgr.x (mgr.14150) 247 : cluster [DBG] pgmap v217: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:44.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:44 vm05 bash[20070]: cluster 2026-03-09T14:22:42.946704+0000 mgr.x (mgr.14150) 247 : cluster [DBG] pgmap v217: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:44.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:44 vm05 bash[20070]: cluster 2026-03-09T14:22:42.946704+0000 mgr.x (mgr.14150) 247 : cluster [DBG] pgmap v217: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:44.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:44 vm03 bash[17524]: cluster 2026-03-09T14:22:42.946704+0000 mgr.x (mgr.14150) 247 : cluster [DBG] pgmap v217: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:44.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:44 vm03 bash[17524]: cluster 2026-03-09T14:22:42.946704+0000 mgr.x (mgr.14150) 247 : cluster [DBG] pgmap v217: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:22:45.784 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.c/config 2026-03-09T14:22:46.048 INFO:teuthology.orchestra.run.vm05.stdout:Scheduled iscsi.datapool update... 2026-03-09T14:22:46.129 INFO:tasks.cephadm:Distributing iscsi-gateway.cfg... 
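The entries above complete the datapool preparation and schedule the iSCSI service. Condensed from the teuthology.orchestra.run DEBUG lines, the sequence the task drives is the following; every ceph/rbd command is wrapped in `cephadm shell` so it executes inside the release image under test rather than against host binaries (the `shell` helper below is only an illustrative shorthand for that wrapper):

```bash
# Shorthand for the wrapper visible in the DEBUG lines above; the image tag
# and fsid are the values from this run.
CEPHADM=/home/ubuntu/cephtest/cephadm
IMAGE=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
FSID=3346de4a-1bc2-11f1-95ae-3796c8433614
shell() {
    sudo "$CEPHADM" --image "$IMAGE" shell \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid "$FSID" -- "$@"
}

shell ceph osd pool create datapool 3 3 replicated  # pg_num 3, pgp_num 3
shell rbd pool init datapool  # this also dispatches the 'osd pool application
                              # enable ... rbd' mon command seen in the audit entries
shell ceph orch apply iscsi datapool admin admin \
    --trusted_ip_list 192.168.123.103,192.168.123.105 \
    --placement '2;vm03=iscsi.a;vm05=iscsi.b'  # two gateways, pinned per host
```

The pgmap transition in the surrounding entries from 1 PG to 4 PGs (`creating+peering` then `active+clean`) is the three datapool PGs coming up alongside the existing .mgr PG.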
2026-03-09T14:22:46.129 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T14:22:46.129 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/iscsi-gateway.cfg 2026-03-09T14:22:46.137 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T14:22:46.137 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/ceph/iscsi-gateway.cfg 2026-03-09T14:22:46.144 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-09T14:22:46.144 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/ceph/iscsi-gateway.cfg 2026-03-09T14:22:46.153 DEBUG:teuthology.orchestra.run.vm03:iscsi.iscsi.a> sudo journalctl -f -n 0 -u ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@iscsi.iscsi.a.service 2026-03-09T14:22:46.181 DEBUG:teuthology.orchestra.run.vm05:iscsi.iscsi.b> sudo journalctl -f -n 0 -u ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@iscsi.iscsi.b.service 2026-03-09T14:22:46.198 INFO:tasks.cephadm:Setting up client nodes... 2026-03-09T14:22:46.198 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-09T14:22:46.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:46 vm04 bash[19581]: cluster 2026-03-09T14:22:44.946940+0000 mgr.x (mgr.14150) 248 : cluster [DBG] pgmap v218: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 341 B/s wr, 0 op/s 2026-03-09T14:22:46.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:46 vm04 bash[19581]: cluster 2026-03-09T14:22:44.946940+0000 mgr.x (mgr.14150) 248 : cluster [DBG] pgmap v218: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 341 B/s wr, 0 op/s 2026-03-09T14:22:46.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:46 vm04 bash[19581]: audit 2026-03-09T14:22:46.047612+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:22:46.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:46 vm04 bash[19581]: audit 2026-03-09T14:22:46.047612+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:22:46.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:46 vm04 bash[19581]: audit 2026-03-09T14:22:46.048610+0000 mon.a (mon.0) 651 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:22:46.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:46 vm04 bash[19581]: audit 2026-03-09T14:22:46.048610+0000 mon.a (mon.0) 651 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:22:46.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:46 vm05 bash[20070]: cluster 2026-03-09T14:22:44.946940+0000 mgr.x (mgr.14150) 248 : cluster [DBG] pgmap v218: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 341 B/s wr, 0 op/s 2026-03-09T14:22:46.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:46 vm05 bash[20070]: cluster 2026-03-09T14:22:44.946940+0000 mgr.x (mgr.14150) 248 : cluster [DBG] pgmap v218: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 341 B/s wr, 0 op/s 2026-03-09T14:22:46.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:46 vm05 bash[20070]: audit 
2026-03-09T14:22:46.047612+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:22:46.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:46 vm05 bash[20070]: audit 2026-03-09T14:22:46.047612+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:22:46.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:46 vm05 bash[20070]: audit 2026-03-09T14:22:46.048610+0000 mon.a (mon.0) 651 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:22:46.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:46 vm05 bash[20070]: audit 2026-03-09T14:22:46.048610+0000 mon.a (mon.0) 651 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:22:46.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:46 vm03 bash[17524]: cluster 2026-03-09T14:22:44.946940+0000 mgr.x (mgr.14150) 248 : cluster [DBG] pgmap v218: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 341 B/s wr, 0 op/s 2026-03-09T14:22:46.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:46 vm03 bash[17524]: cluster 2026-03-09T14:22:44.946940+0000 mgr.x (mgr.14150) 248 : cluster [DBG] pgmap v218: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 341 B/s wr, 0 op/s 2026-03-09T14:22:46.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:46 vm03 bash[17524]: audit 2026-03-09T14:22:46.047612+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:22:46.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:46 vm03 bash[17524]: audit 2026-03-09T14:22:46.047612+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:22:46.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:46 vm03 bash[17524]: audit 2026-03-09T14:22:46.048610+0000 mon.a (mon.0) 651 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:22:46.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:46 vm03 bash[17524]: audit 2026-03-09T14:22:46.048610+0000 mon.a (mon.0) 651 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:22:47.278 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:22:46 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:22:47.278 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:22:47 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
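After scheduling the service, the task fans /etc/ceph/iscsi-gateway.cfg out to all three nodes, starts tailing the gateway units so their output lands in this archive, and mints the client keyring used by the cram workloads. A sketch of those steps, reconstructed from the DEBUG lines above — note the cfg payload is piped over stdin by the task and never echoed into this log, so it is elided here:

```bash
# On each of vm03, vm04 and vm05 -- the file content arrives on stdin:
sudo dd of=/etc/ceph/iscsi-gateway.cfg

# Follow each gateway unit from its first line onward (iscsi.iscsi.b on vm05
# is tailed the same way):
sudo journalctl -f -n 0 \
    -u ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@iscsi.iscsi.a.service

# Client setup, reusing the cephadm-shell shorthand from the sketch above:
shell ceph auth get-or-create client.0 \
    mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'
```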
2026-03-09T14:22:47.278 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:22:46 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:22:47.278 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:22:47 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:22:47.278 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:22:46 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:22:47.278 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:22:47 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:22:47.279 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:46 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:22:47.279 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:47 vm03 bash[17524]: audit 2026-03-09T14:22:46.042378+0000 mgr.x (mgr.14150) 249 : audit [DBG] from='client.14412 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.103,192.168.123.105", "placement": "2;vm03=iscsi.a;vm05=iscsi.b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:22:47.279 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:47 vm03 bash[17524]: audit 2026-03-09T14:22:46.042378+0000 mgr.x (mgr.14150) 249 : audit [DBG] from='client.14412 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.103,192.168.123.105", "placement": "2;vm03=iscsi.a;vm05=iscsi.b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:22:47.279 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:47 vm03 bash[17524]: cephadm 2026-03-09T14:22:46.043531+0000 mgr.x (mgr.14150) 250 : cephadm [INF] Saving service iscsi.datapool spec with placement vm03=iscsi.a;vm05=iscsi.b;count:2 2026-03-09T14:22:47.279 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:47 vm03 bash[17524]: cephadm 2026-03-09T14:22:46.043531+0000 mgr.x (mgr.14150) 250 : cephadm [INF] Saving service iscsi.datapool spec with placement vm03=iscsi.a;vm05=iscsi.b;count:2 2026-03-09T14:22:47.279 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:47 vm03 bash[17524]: audit 2026-03-09T14:22:46.420671+0000 mon.a (mon.0) 652 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:22:47.279 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:47 vm03 bash[17524]: audit 2026-03-09T14:22:46.420671+0000 mon.a (mon.0) 652 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:22:47.279 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:47 vm03 bash[17524]: audit 2026-03-09T14:22:46.421283+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:22:47.279 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:47 vm03 bash[17524]: audit 2026-03-09T14:22:46.421283+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:22:47.279 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:47 vm03 bash[17524]: audit 2026-03-09T14:22:46.427786+0000 mon.a (mon.0) 654 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:22:47.279 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:47 vm03 bash[17524]: audit 2026-03-09T14:22:46.427786+0000 mon.a (mon.0) 654 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:22:47.279 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:47 vm03 bash[17524]: audit 2026-03-09T14:22:46.429722+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 
2026-03-09T14:22:47.279 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:47 vm03 bash[17524]: audit 2026-03-09T14:22:46.429722+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T14:22:47.279 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:47 vm03 bash[17524]: audit 2026-03-09T14:22:46.431996+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T14:22:47.279 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:47 vm03 bash[17524]: audit 2026-03-09T14:22:46.431996+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T14:22:47.279 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:47 vm03 bash[17524]: audit 2026-03-09T14:22:46.437018+0000 mon.a (mon.0) 657 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:22:47.279 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:47 vm03 bash[17524]: audit 2026-03-09T14:22:46.437018+0000 mon.a (mon.0) 657 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:22:47.279 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:47 vm03 bash[17524]: cephadm 2026-03-09T14:22:46.437838+0000 mgr.x (mgr.14150) 251 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm03 2026-03-09T14:22:47.279 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:47 vm03 bash[17524]: cephadm 2026-03-09T14:22:46.437838+0000 mgr.x (mgr.14150) 251 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm03 2026-03-09T14:22:47.279 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:47 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
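Before "Deploying daemon iscsi.iscsi.a on vm03", cephadm mints a dedicated cephx identity for the gateway. The caps are spelled out (JSON-escaped) in the audit entries above; unescaped, the mon command is equivalent to the following — issued by mgr.x itself, reproduced here only for readability:

```bash
ceph auth get-or-create client.iscsi.iscsi.a \
    mon 'profile rbd, allow command "osd blocklist", allow command "config-key get" with "key" prefix "iscsi/"' \
    mgr 'allow command "service status"' \
    osd 'allow rwx'
```

The mon cap is the interesting one: beyond the rbd profile, the gateway is allowed to manage OSD blocklist entries (see the "Removing blocklisted entry" line further down) and to read its own config-key namespace under iscsi/.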
2026-03-09T14:22:47.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:47 vm04 bash[19581]: audit 2026-03-09T14:22:46.042378+0000 mgr.x (mgr.14150) 249 : audit [DBG] from='client.14412 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.103,192.168.123.105", "placement": "2;vm03=iscsi.a;vm05=iscsi.b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:22:47.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:47 vm04 bash[19581]: audit 2026-03-09T14:22:46.042378+0000 mgr.x (mgr.14150) 249 : audit [DBG] from='client.14412 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.103,192.168.123.105", "placement": "2;vm03=iscsi.a;vm05=iscsi.b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:22:47.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:47 vm04 bash[19581]: cephadm 2026-03-09T14:22:46.043531+0000 mgr.x (mgr.14150) 250 : cephadm [INF] Saving service iscsi.datapool spec with placement vm03=iscsi.a;vm05=iscsi.b;count:2 2026-03-09T14:22:47.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:47 vm04 bash[19581]: cephadm 2026-03-09T14:22:46.043531+0000 mgr.x (mgr.14150) 250 : cephadm [INF] Saving service iscsi.datapool spec with placement vm03=iscsi.a;vm05=iscsi.b;count:2 2026-03-09T14:22:47.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:47 vm04 bash[19581]: audit 2026-03-09T14:22:46.420671+0000 mon.a (mon.0) 652 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:22:47.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:47 vm04 bash[19581]: audit 2026-03-09T14:22:46.420671+0000 mon.a (mon.0) 652 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:22:47.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:47 vm04 bash[19581]: audit 2026-03-09T14:22:46.421283+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:22:47.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:47 vm04 bash[19581]: audit 2026-03-09T14:22:46.421283+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:22:47.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:47 vm04 bash[19581]: audit 2026-03-09T14:22:46.427786+0000 mon.a (mon.0) 654 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:22:47.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:47 vm04 bash[19581]: audit 2026-03-09T14:22:46.427786+0000 mon.a (mon.0) 654 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:22:47.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:47 vm04 bash[19581]: audit 2026-03-09T14:22:46.429722+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 
2026-03-09T14:22:47.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:47 vm04 bash[19581]: audit 2026-03-09T14:22:46.429722+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T14:22:47.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:47 vm04 bash[19581]: audit 2026-03-09T14:22:46.431996+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T14:22:47.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:47 vm04 bash[19581]: audit 2026-03-09T14:22:46.431996+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T14:22:47.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:47 vm04 bash[19581]: audit 2026-03-09T14:22:46.437018+0000 mon.a (mon.0) 657 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:22:47.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:47 vm04 bash[19581]: audit 2026-03-09T14:22:46.437018+0000 mon.a (mon.0) 657 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:22:47.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:47 vm04 bash[19581]: cephadm 2026-03-09T14:22:46.437838+0000 mgr.x (mgr.14150) 251 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm03 2026-03-09T14:22:47.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:47 vm04 bash[19581]: cephadm 2026-03-09T14:22:46.437838+0000 mgr.x (mgr.14150) 251 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm03 2026-03-09T14:22:47.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:47 vm05 bash[20070]: audit 2026-03-09T14:22:46.042378+0000 mgr.x (mgr.14150) 249 : audit [DBG] from='client.14412 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.103,192.168.123.105", "placement": "2;vm03=iscsi.a;vm05=iscsi.b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:22:47.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:47 vm05 bash[20070]: audit 2026-03-09T14:22:46.042378+0000 mgr.x (mgr.14150) 249 : audit [DBG] from='client.14412 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.103,192.168.123.105", "placement": "2;vm03=iscsi.a;vm05=iscsi.b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:22:47.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:47 vm05 bash[20070]: cephadm 
2026-03-09T14:22:46.043531+0000 mgr.x (mgr.14150) 250 : cephadm [INF] Saving service iscsi.datapool spec with placement vm03=iscsi.a;vm05=iscsi.b;count:2 2026-03-09T14:22:47.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:47 vm05 bash[20070]: cephadm 2026-03-09T14:22:46.043531+0000 mgr.x (mgr.14150) 250 : cephadm [INF] Saving service iscsi.datapool spec with placement vm03=iscsi.a;vm05=iscsi.b;count:2 2026-03-09T14:22:47.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:47 vm05 bash[20070]: audit 2026-03-09T14:22:46.420671+0000 mon.a (mon.0) 652 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:22:47.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:47 vm05 bash[20070]: audit 2026-03-09T14:22:46.420671+0000 mon.a (mon.0) 652 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:22:47.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:47 vm05 bash[20070]: audit 2026-03-09T14:22:46.421283+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:22:47.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:47 vm05 bash[20070]: audit 2026-03-09T14:22:46.421283+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:22:47.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:47 vm05 bash[20070]: audit 2026-03-09T14:22:46.427786+0000 mon.a (mon.0) 654 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:22:47.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:47 vm05 bash[20070]: audit 2026-03-09T14:22:46.427786+0000 mon.a (mon.0) 654 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:22:47.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:47 vm05 bash[20070]: audit 2026-03-09T14:22:46.429722+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T14:22:47.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:47 vm05 bash[20070]: audit 2026-03-09T14:22:46.429722+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T14:22:47.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:47 vm05 bash[20070]: audit 2026-03-09T14:22:46.431996+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 
2026-03-09T14:22:47.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:47 vm05 bash[20070]: audit 2026-03-09T14:22:46.437018+0000 mon.a (mon.0) 657 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:22:47.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:47 vm05 bash[20070]: cephadm 2026-03-09T14:22:46.437838+0000 mgr.x (mgr.14150) 251 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm03
2026-03-09T14:22:48.053 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:47 vm03 bash[37744]: debug Removing blocklisted entry for this host : 192.168.123.103:0/3863832399
2026-03-09T14:22:48.204 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:47 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:22:48.205 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:48 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:22:48.205 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:22:47 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:22:48.205 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:22:48 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:22:48.205 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:22:47 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:22:48.205 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:22:48 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:22:48.205 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:22:47 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:22:48.205 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:22:48 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
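The KillMode=none complaints repeat once per daemon channel because every cephadm-managed unit on the host instantiates the same template, ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service; they are systemd deprecation warnings, not failures. A sketch of how one could confirm where the setting comes from; the override shown in the comments is purely illustrative, since cephadm ships KillMode=none in its unit template and this run leaves it alone:

    # Locate the KillMode setting systemd is warning about (line 23 of the template).
    systemctl cat 'ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service' | grep -n 'KillMode'
    # An admin could silence the warning with a drop-in, e.g.:
    #   sudo systemctl edit 'ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service'
    #   [Service]
    #   KillMode=mixed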
2026-03-09T14:22:48.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:48 vm05 bash[20070]: cluster 2026-03-09T14:22:46.947229+0000 mgr.x (mgr.14150) 252 : cluster [DBG] pgmap v219: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 260 B/s wr, 0 op/s
2026-03-09T14:22:48.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:48 vm05 bash[20070]: audit 2026-03-09T14:22:47.325342+0000 mon.a (mon.0) 658 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:48.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:48 vm05 bash[20070]: audit 2026-03-09T14:22:47.331290+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:48.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:48 vm05 bash[20070]: audit 2026-03-09T14:22:47.335605+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:48.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:48 vm05 bash[20070]: audit 2026-03-09T14:22:47.337523+0000 mon.a (mon.0) 661 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.b", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-09T14:22:48.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:48 vm05 bash[20070]: audit 2026-03-09T14:22:47.340356+0000 mon.a (mon.0) 662 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.b", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished
2026-03-09T14:22:48.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:48 vm05 bash[20070]: audit 2026-03-09T14:22:47.343741+0000 mon.a (mon.0) 663 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:22:48.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:48 vm05 bash[20070]: cephadm 2026-03-09T14:22:47.344471+0000 mgr.x (mgr.14150) 253 : cephadm [INF] Deploying daemon iscsi.iscsi.b on vm05
2026-03-09T14:22:48.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:48 vm05 bash[20070]: audit 2026-03-09T14:22:47.786724+0000 mon.a (mon.0) 664 : audit [DBG] from='client.? 192.168.123.103:0/3424683558' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-09T14:22:48.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:48 vm05 bash[20070]: audit 2026-03-09T14:22:47.980513+0000 mon.a (mon.0) 665 : audit [INF] from='client.? 192.168.123.103:0/65253590' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/3863832399"}]: dispatch
2026-03-09T14:22:48.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:48 vm05 bash[20070]: audit 2026-03-09T14:22:48.250282+0000 mon.a (mon.0) 666 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:48.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:48 vm05 bash[20070]: audit 2026-03-09T14:22:48.255131+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:48.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:48 vm05 bash[20070]: audit 2026-03-09T14:22:48.259156+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:48.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:48 vm05 bash[20070]: audit 2026-03-09T14:22:48.269877+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:48.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:48 vm05 bash[20070]: audit 2026-03-09T14:22:48.285804+0000 mon.a (mon.0) 670 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:22:48.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:48 vm04 bash[19581]: cluster 2026-03-09T14:22:46.947229+0000 mgr.x (mgr.14150) 252 : cluster [DBG] pgmap v219: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 260 B/s wr, 0 op/s
2026-03-09T14:22:48.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:48 vm04 bash[19581]: audit 2026-03-09T14:22:47.325342+0000 mon.a (mon.0) 658 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:48.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:48 vm04 bash[19581]: audit 2026-03-09T14:22:47.331290+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:48.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:48 vm04 bash[19581]: audit 2026-03-09T14:22:47.335605+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:48.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:48 vm04 bash[19581]: audit 2026-03-09T14:22:47.337523+0000 mon.a (mon.0) 661 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.b", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-09T14:22:48.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:48 vm04 bash[19581]: audit 2026-03-09T14:22:47.340356+0000 mon.a (mon.0) 662 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.b", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished
2026-03-09T14:22:48.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:48 vm04 bash[19581]: audit 2026-03-09T14:22:47.343741+0000 mon.a (mon.0) 663 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
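Audit entries 655/656 and 661/662 show cephadm minting one cephx identity per gateway daemon, with mon caps restricted to the rbd profile, the "osd blocklist" command, and config-key reads under the iscsi/ prefix. A sketch of the equivalent manual command, with the caps copied verbatim from the logged request:

    # Recreate the gateway identity by hand (caps verbatim from audit entry 661).
    ceph auth get-or-create client.iscsi.iscsi.b \
        mon 'profile rbd, allow command "osd blocklist", allow command "config-key get" with "key" prefix "iscsi/"' \
        mgr 'allow command "service status"' \
        osd 'allow rwx'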
2026-03-09T14:22:48.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:48 vm04 bash[19581]: cephadm 2026-03-09T14:22:47.344471+0000 mgr.x (mgr.14150) 253 : cephadm [INF] Deploying daemon iscsi.iscsi.b on vm05
2026-03-09T14:22:48.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:48 vm04 bash[19581]: audit 2026-03-09T14:22:47.786724+0000 mon.a (mon.0) 664 : audit [DBG] from='client.? 192.168.123.103:0/3424683558' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-09T14:22:48.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:48 vm04 bash[19581]: audit 2026-03-09T14:22:47.980513+0000 mon.a (mon.0) 665 : audit [INF] from='client.? 192.168.123.103:0/65253590' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/3863832399"}]: dispatch
2026-03-09T14:22:48.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:48 vm04 bash[19581]: audit 2026-03-09T14:22:48.250282+0000 mon.a (mon.0) 666 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:48.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:48 vm04 bash[19581]: audit 2026-03-09T14:22:48.255131+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:48.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:48 vm04 bash[19581]: audit 2026-03-09T14:22:48.259156+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:48.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:48 vm04 bash[19581]: audit 2026-03-09T14:22:48.269877+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:48.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:48 vm04 bash[19581]: audit 2026-03-09T14:22:48.285804+0000 mon.a (mon.0) 670 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:22:48.803 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:48 vm03 bash[37744]: debug Successfully removed blocklist entry
2026-03-09T14:22:48.803 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:48 vm03 bash[37744]: debug Removing blocklisted entry for this host : 192.168.123.103:0/2873272369
2026-03-09T14:22:48.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:48 vm03 bash[17524]: cluster 2026-03-09T14:22:46.947229+0000 mgr.x (mgr.14150) 252 : cluster [DBG] pgmap v219: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 260 B/s wr, 0 op/s
2026-03-09T14:22:48.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:48 vm03 bash[17524]: audit 2026-03-09T14:22:47.325342+0000 mon.a (mon.0) 658 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:48.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:48 vm03 bash[17524]: audit 2026-03-09T14:22:47.331290+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:48.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:48 vm03 bash[17524]: audit 2026-03-09T14:22:47.335605+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:48.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:48 vm03 bash[17524]: audit 2026-03-09T14:22:47.337523+0000 mon.a (mon.0) 661 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.b", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-09T14:22:48.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:48 vm03 bash[17524]: audit 2026-03-09T14:22:47.340356+0000 mon.a (mon.0) 662 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.b", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished
2026-03-09T14:22:48.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:48 vm03 bash[17524]: audit 2026-03-09T14:22:47.343741+0000 mon.a (mon.0) 663 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:22:48.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:48 vm03 bash[17524]: cephadm 2026-03-09T14:22:47.344471+0000 mgr.x (mgr.14150) 253 : cephadm [INF] Deploying daemon iscsi.iscsi.b on vm05
2026-03-09T14:22:48.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:48 vm03 bash[17524]: audit 2026-03-09T14:22:47.786724+0000 mon.a (mon.0) 664 : audit [DBG] from='client.? 192.168.123.103:0/3424683558' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-09T14:22:48.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:48 vm03 bash[17524]: audit 2026-03-09T14:22:47.980513+0000 mon.a (mon.0) 665 : audit [INF] from='client.? 192.168.123.103:0/65253590' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/3863832399"}]: dispatch
2026-03-09T14:22:48.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:48 vm03 bash[17524]: audit 2026-03-09T14:22:48.250282+0000 mon.a (mon.0) 666 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:48.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:48 vm03 bash[17524]: audit 2026-03-09T14:22:48.255131+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:48.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:48 vm03 bash[17524]: audit 2026-03-09T14:22:48.259156+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:48.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:48 vm03 bash[17524]: audit 2026-03-09T14:22:48.269877+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:48.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:48 vm03 bash[17524]: audit 2026-03-09T14:22:48.285804+0000 mon.a (mon.0) 670 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:22:49.008 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:22:48 vm05 bash[38699]: * Running on http://[::1]:5000/ (Press CTRL+C to quit)
2026-03-09T14:22:49.526 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:49 vm03 bash[37744]: debug Successfully removed blocklist entry
2026-03-09T14:22:49.526 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:49 vm03 bash[37744]: debug Removing blocklisted entry for this host : 192.168.123.103:0/134279448
2026-03-09T14:22:49.527 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:49 vm03 bash[17524]: cephadm 2026-03-09T14:22:48.259436+0000 mgr.x (mgr.14150) 254 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool
2026-03-09T14:22:49.527 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:49 vm03 bash[17524]: audit 2026-03-09T14:22:48.438318+0000 mon.a (mon.0) 671 : audit [INF] from='client.? 192.168.123.103:0/65253590' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/3863832399"}]': finished
2026-03-09T14:22:49.527 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:49 vm03 bash[17524]: cluster 2026-03-09T14:22:48.443777+0000 mon.a (mon.0) 672 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in
2026-03-09T14:22:49.527 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:49 vm03 bash[17524]: audit 2026-03-09T14:22:48.620907+0000 mon.a (mon.0) 673 : audit [INF] from='client.? 192.168.123.103:0/1150770618' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/2873272369"}]: dispatch
2026-03-09T14:22:49.527 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:49 vm03 bash[17524]: audit 2026-03-09T14:22:48.758452+0000 mon.c (mon.1) 21 : audit [DBG] from='client.? 192.168.123.105:0/1990239314' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-09T14:22:49.527 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:49 vm03 bash[17524]: audit 2026-03-09T14:22:49.257205+0000 mon.a (mon.0) 674 : audit [INF] from='client.? 192.168.123.103:0/1150770618' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/2873272369"}]': finished
2026-03-09T14:22:49.527 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:49 vm03 bash[17524]: cluster 2026-03-09T14:22:49.259215+0000 mon.a (mon.0) 675 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in
2026-03-09T14:22:49.527 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:49 vm03 bash[17524]: audit 2026-03-09T14:22:49.430606+0000 mon.a (mon.0) 676 : audit [INF] from='client.? 192.168.123.103:0/41968234' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/134279448"}]: dispatch
2026-03-09T14:22:49.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:49 vm04 bash[19581]: cephadm 2026-03-09T14:22:48.259436+0000 mgr.x (mgr.14150) 254 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool
2026-03-09T14:22:49.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:49 vm04 bash[19581]: audit 2026-03-09T14:22:48.438318+0000 mon.a (mon.0) 671 : audit [INF] from='client.? 192.168.123.103:0/65253590' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/3863832399"}]': finished
2026-03-09T14:22:49.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:49 vm04 bash[19581]: cluster 2026-03-09T14:22:48.443777+0000 mon.a (mon.0) 672 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in
2026-03-09T14:22:49.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:49 vm04 bash[19581]: audit 2026-03-09T14:22:48.620907+0000 mon.a (mon.0) 673 : audit [INF] from='client.? 192.168.123.103:0/1150770618' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/2873272369"}]: dispatch
2026-03-09T14:22:49.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:49 vm04 bash[19581]: audit 2026-03-09T14:22:48.758452+0000 mon.c (mon.1) 21 : audit [DBG] from='client.? 192.168.123.105:0/1990239314' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-09T14:22:49.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:49 vm04 bash[19581]: audit 2026-03-09T14:22:49.257205+0000 mon.a (mon.0) 674 : audit [INF] from='client.? 192.168.123.103:0/1150770618' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/2873272369"}]': finished
2026-03-09T14:22:49.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:49 vm04 bash[19581]: cluster 2026-03-09T14:22:49.259215+0000 mon.a (mon.0) 675 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in
2026-03-09T14:22:49.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:49 vm04 bash[19581]: audit 2026-03-09T14:22:49.430606+0000 mon.a (mon.0) 676 : audit [INF] from='client.? 192.168.123.103:0/41968234' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/134279448"}]: dispatch
2026-03-09T14:22:49.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:49 vm05 bash[20070]: cephadm 2026-03-09T14:22:48.259436+0000 mgr.x (mgr.14150) 254 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool
2026-03-09T14:22:49.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:49 vm05 bash[20070]: audit 2026-03-09T14:22:48.438318+0000 mon.a (mon.0) 671 : audit [INF] from='client.? 192.168.123.103:0/65253590' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/3863832399"}]': finished
2026-03-09T14:22:49.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:49 vm05 bash[20070]: cluster 2026-03-09T14:22:48.443777+0000 mon.a (mon.0) 672 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in
2026-03-09T14:22:49.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:49 vm05 bash[20070]: audit 2026-03-09T14:22:48.620907+0000 mon.a (mon.0) 673 : audit [INF] from='client.? 192.168.123.103:0/1150770618' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/2873272369"}]: dispatch
2026-03-09T14:22:49.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:49 vm05 bash[20070]: audit 2026-03-09T14:22:48.758452+0000 mon.c (mon.1) 21 : audit [DBG] from='client.? 192.168.123.105:0/1990239314' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-09T14:22:49.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:49 vm05 bash[20070]: audit 2026-03-09T14:22:49.257205+0000 mon.a (mon.0) 674 : audit [INF] from='client.? 192.168.123.103:0/1150770618' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/2873272369"}]': finished
2026-03-09T14:22:49.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:49 vm05 bash[20070]: cluster 2026-03-09T14:22:49.259215+0000 mon.a (mon.0) 675 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in
2026-03-09T14:22:49.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:49 vm05 bash[20070]: audit 2026-03-09T14:22:49.430606+0000 mon.a (mon.0) 676 : audit [INF] from='client.? 192.168.123.103:0/41968234' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/134279448"}]: dispatch
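The dispatch/finished pairs for audit entries 664 through 676 are the freshly started gateways clearing stale blocklist entries for their own previous addresses, one "osd blocklist rm" per address, and each removal commits a new osdmap (e56, then e57). The same cycle from a shell, with the address taken from entry 673:

    # Inspect and clear blocklist entries the way the gateway startup does.
    ceph osd blocklist ls
    ceph osd blocklist rm 192.168.123.103:0/2873272369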
2026-03-09T14:22:50.553 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:50 vm03 bash[37744]: debug Successfully removed blocklist entry
2026-03-09T14:22:50.553 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:50 vm03 bash[37744]: debug Removing blocklisted entry for this host : 192.168.123.103:0/2194687713
2026-03-09T14:22:50.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:50 vm03 bash[17524]: cluster 2026-03-09T14:22:48.947530+0000 mgr.x (mgr.14150) 255 : cluster [DBG] pgmap v221: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 255 B/s wr, 0 op/s
2026-03-09T14:22:50.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:50 vm03 bash[17524]: cluster 2026-03-09T14:22:49.453155+0000 mon.a (mon.0) 677 : cluster [DBG] mgrmap e14: x(active, since 5m)
2026-03-09T14:22:50.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:50 vm03 bash[17524]: audit 2026-03-09T14:22:50.263105+0000 mon.a (mon.0) 678 : audit [INF] from='client.? 192.168.123.103:0/41968234' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/134279448"}]': finished
2026-03-09T14:22:50.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:50 vm03 bash[17524]: cluster 2026-03-09T14:22:50.265544+0000 mon.a (mon.0) 679 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in
2026-03-09T14:22:50.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:50 vm03 bash[17524]: audit 2026-03-09T14:22:50.434647+0000 mon.b (mon.2) 18 : audit [INF] from='client.? 192.168.123.103:0/1635059016' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/2194687713"}]: dispatch
2026-03-09T14:22:50.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:50 vm03 bash[17524]: audit 2026-03-09T14:22:50.435595+0000 mon.a (mon.0) 680 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/2194687713"}]: dispatch
2026-03-09T14:22:50.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:50 vm04 bash[19581]: cluster 2026-03-09T14:22:48.947530+0000 mgr.x (mgr.14150) 255 : cluster [DBG] pgmap v221: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 255 B/s wr, 0 op/s
2026-03-09T14:22:50.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:50 vm04 bash[19581]: cluster 2026-03-09T14:22:49.453155+0000 mon.a (mon.0) 677 : cluster [DBG] mgrmap e14: x(active, since 5m)
2026-03-09T14:22:50.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:50 vm04 bash[19581]: audit 2026-03-09T14:22:50.263105+0000 mon.a (mon.0) 678 : audit [INF] from='client.? 192.168.123.103:0/41968234' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/134279448"}]': finished
2026-03-09T14:22:50.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:50 vm04 bash[19581]: cluster 2026-03-09T14:22:50.265544+0000 mon.a (mon.0) 679 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in
2026-03-09T14:22:50.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:50 vm04 bash[19581]: audit 2026-03-09T14:22:50.434647+0000 mon.b (mon.2) 18 : audit [INF] from='client.? 192.168.123.103:0/1635059016' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/2194687713"}]: dispatch
2026-03-09T14:22:50.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:50 vm04 bash[19581]: audit 2026-03-09T14:22:50.435595+0000 mon.a (mon.0) 680 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/2194687713"}]: dispatch
2026-03-09T14:22:50.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:50 vm05 bash[20070]: cluster 2026-03-09T14:22:48.947530+0000 mgr.x (mgr.14150) 255 : cluster [DBG] pgmap v221: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 255 B/s wr, 0 op/s
2026-03-09T14:22:50.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:50 vm05 bash[20070]: cluster 2026-03-09T14:22:49.453155+0000 mon.a (mon.0) 677 : cluster [DBG] mgrmap e14: x(active, since 5m)
2026-03-09T14:22:50.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:50 vm05 bash[20070]: audit 2026-03-09T14:22:50.263105+0000 mon.a (mon.0) 678 : audit [INF] from='client.? 192.168.123.103:0/41968234' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/134279448"}]': finished
2026-03-09T14:22:50.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:50 vm05 bash[20070]: cluster 2026-03-09T14:22:50.265544+0000 mon.a (mon.0) 679 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in
2026-03-09T14:22:50.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:50 vm05 bash[20070]: audit 2026-03-09T14:22:50.434647+0000 mon.b (mon.2) 18 : audit [INF] from='client.? 192.168.123.103:0/1635059016' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/2194687713"}]: dispatch
2026-03-09T14:22:50.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:50 vm05 bash[20070]: audit 2026-03-09T14:22:50.435595+0000 mon.a (mon.0) 680 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/2194687713"}]: dispatch
2026-03-09T14:22:51.553 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:51 vm03 bash[37744]: debug Successfully removed blocklist entry
2026-03-09T14:22:51.553 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:51 vm03 bash[37744]: debug Removing blocklisted entry for this host : 192.168.123.103:6801/623478427
2026-03-09T14:22:51.882 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config
2026-03-09T14:22:52.167 INFO:teuthology.orchestra.run.vm03.stdout:[client.0]
2026-03-09T14:22:52.167 INFO:teuthology.orchestra.run.vm03.stdout: key = AQC8165p+4q6CRAAV4ynJq56MW13Tk/vDXcLaQ==
2026-03-09T14:22:52.177 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:51 vm03 bash[17524]: audit 2026-03-09T14:22:50.805306+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:52.177 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:51 vm03 bash[17524]: cluster 2026-03-09T14:22:50.947811+0000 mgr.x (mgr.14150) 256 : cluster [DBG] pgmap v224: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 170 B/s wr, 2 op/s
2026-03-09T14:22:52.177 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:51 vm03 bash[17524]: audit 2026-03-09T14:22:51.266239+0000 mon.a (mon.0) 682 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/2194687713"}]': finished
2026-03-09T14:22:52.177 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:51 vm03 bash[17524]: cluster 2026-03-09T14:22:51.268873+0000 mon.a (mon.0) 683 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in
2026-03-09T14:22:52.177 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:51 vm03 bash[17524]: audit 2026-03-09T14:22:51.441298+0000 mon.a (mon.0) 684 : audit [INF] from='client.? 192.168.123.103:0/3970181762' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6801/623478427"}]: dispatch
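Blocklist state lives in the osdmap, which is why the epoch steps from e56 up to e59 across these removals while the PG state stays at 4 active+clean throughout. A quick way to watch that from a shell, using standard ceph CLI output:

    # The epoch is on the first line of `ceph osd dump`; the counts match the
    # "8 total, 8 up, 8 in" cluster-log entries above.
    ceph osd dump | head -n 1
    ceph osd stat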
2026-03-09T14:22:52.222 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-09T14:22:52.222 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.client.0.keyring
2026-03-09T14:22:52.222 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring
2026-03-09T14:22:52.234 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'
2026-03-09T14:22:52.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:51 vm04 bash[19581]: audit 2026-03-09T14:22:50.805306+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:52.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:51 vm04 bash[19581]: cluster 2026-03-09T14:22:50.947811+0000 mgr.x (mgr.14150) 256 : cluster [DBG] pgmap v224: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 170 B/s wr, 2 op/s
2026-03-09T14:22:52.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:51 vm04 bash[19581]: audit 2026-03-09T14:22:51.266239+0000 mon.a (mon.0) 682 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/2194687713"}]': finished
2026-03-09T14:22:52.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:51 vm04 bash[19581]: cluster 2026-03-09T14:22:51.268873+0000 mon.a (mon.0) 683 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in
2026-03-09T14:22:52.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:51 vm04 bash[19581]: audit 2026-03-09T14:22:51.441298+0000 mon.a (mon.0) 684 : audit [INF] from='client.? 192.168.123.103:0/3970181762' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6801/623478427"}]: dispatch
2026-03-09T14:22:52.258 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:51 vm05 bash[20070]: audit 2026-03-09T14:22:50.805306+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:52.258 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:51 vm05 bash[20070]: cluster 2026-03-09T14:22:50.947811+0000 mgr.x (mgr.14150) 256 : cluster [DBG] pgmap v224: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 170 B/s wr, 2 op/s
2026-03-09T14:22:52.258 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:51 vm05 bash[20070]: audit 2026-03-09T14:22:51.266239+0000 mon.a (mon.0) 682 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/2194687713"}]': finished
2026-03-09T14:22:52.258 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:51 vm05 bash[20070]: cluster 2026-03-09T14:22:51.268873+0000 mon.a (mon.0) 683 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in
2026-03-09T14:22:52.258 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:51 vm05 bash[20070]: audit 2026-03-09T14:22:51.441298+0000 mon.a (mon.0) 684 : audit [INF] from='client.? 192.168.123.103:0/3970181762' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6801/623478427"}]: dispatch
2026-03-09T14:22:52.553 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:52 vm03 bash[37744]: debug Successfully removed blocklist entry
2026-03-09T14:22:52.553 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:52 vm03 bash[37744]: debug Removing blocklisted entry for this host : 192.168.123.103:6801/2250683817
2026-03-09T14:22:53.258 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:52 vm04 bash[19581]: audit 2026-03-09T14:22:52.163129+0000 mon.a (mon.0) 685 : audit [INF] from='client.? 192.168.123.103:0/2052936160' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-09T14:22:53.258 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:52 vm04 bash[19581]: audit 2026-03-09T14:22:52.165779+0000 mon.a (mon.0) 686 : audit [INF] from='client.? 192.168.123.103:0/2052936160' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-09T14:22:53.258 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:52 vm04 bash[19581]: audit 2026-03-09T14:22:52.269551+0000 mon.a (mon.0) 687 : audit [INF] from='client.? 192.168.123.103:0/3970181762' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6801/623478427"}]': finished
2026-03-09T14:22:53.258 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:52 vm04 bash[19581]: cluster 2026-03-09T14:22:52.275822+0000 mon.a (mon.0) 688 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in
2026-03-09T14:22:53.258 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:52 vm04 bash[19581]: audit 2026-03-09T14:22:52.443205+0000 mon.a (mon.0) 689 : audit [INF] from='client.? 192.168.123.103:0/4254072755' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6801/2250683817"}]: dispatch
2026-03-09T14:22:53.258 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:52 vm05 bash[20070]: audit 2026-03-09T14:22:52.163129+0000 mon.a (mon.0) 685 : audit [INF] from='client.? 192.168.123.103:0/2052936160' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-09T14:22:53.258 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:52 vm05 bash[20070]: audit 2026-03-09T14:22:52.165779+0000 mon.a (mon.0) 686 : audit [INF] from='client.? 192.168.123.103:0/2052936160' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-09T14:22:53.258 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:52 vm05 bash[20070]: audit 2026-03-09T14:22:52.269551+0000 mon.a (mon.0) 687 : audit [INF] from='client.? 192.168.123.103:0/3970181762' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6801/623478427"}]': finished
2026-03-09T14:22:53.258 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:52 vm05 bash[20070]: cluster 2026-03-09T14:22:52.275822+0000 mon.a (mon.0) 688 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in
2026-03-09T14:22:53.258 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:52 vm05 bash[20070]: audit 2026-03-09T14:22:52.443205+0000 mon.a (mon.0) 689 : audit [INF] from='client.? 192.168.123.103:0/4254072755' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6801/2250683817"}]: dispatch
2026-03-09T14:22:53.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:52 vm03 bash[17524]: audit 2026-03-09T14:22:52.163129+0000 mon.a (mon.0) 685 : audit [INF] from='client.? 192.168.123.103:0/2052936160' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-09T14:22:53.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:52 vm03 bash[17524]: audit 2026-03-09T14:22:52.165779+0000 mon.a (mon.0) 686 : audit [INF] from='client.? 192.168.123.103:0/2052936160' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-09T14:22:53.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:52 vm03 bash[17524]: audit 2026-03-09T14:22:52.269551+0000 mon.a (mon.0) 687 : audit [INF] from='client.? 192.168.123.103:0/3970181762' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6801/623478427"}]': finished
2026-03-09T14:22:53.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:52 vm03 bash[17524]: cluster 2026-03-09T14:22:52.275822+0000 mon.a (mon.0) 688 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in
2026-03-09T14:22:53.286 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:52 vm03 bash[17524]: audit 2026-03-09T14:22:52.443205+0000 mon.a (mon.0) 689 : audit [INF] from='client.? 192.168.123.103:0/4254072755' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6801/2250683817"}]: dispatch
2026-03-09T14:22:53.553 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:53 vm03 bash[37744]: debug Successfully removed blocklist entry
2026-03-09T14:22:53.554 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:53 vm03 bash[37744]: debug Removing blocklisted entry for this host : 192.168.123.103:6800/623478427
2026-03-09T14:22:54.397 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:54 vm03 bash[17524]: cluster 2026-03-09T14:22:52.948094+0000 mgr.x (mgr.14150) 257 : cluster [DBG] pgmap v227: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 255 B/s wr, 3 op/s
2026-03-09T14:22:54.397 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:54 vm03 bash[17524]: audit 2026-03-09T14:22:53.133440+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:54.397 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:54 vm03 bash[17524]: audit 2026-03-09T14:22:53.137905+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:54.397 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:54 vm03 bash[17524]: audit 2026-03-09T14:22:53.272493+0000 mon.a (mon.0) 692 : audit [INF] from='client.? 192.168.123.103:0/4254072755' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6801/2250683817"}]': finished
2026-03-09T14:22:54.397 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:54 vm03 bash[17524]: cluster 2026-03-09T14:22:53.274843+0000 mon.a (mon.0) 693 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in
2026-03-09T14:22:54.397 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:54 vm03 bash[17524]: audit 2026-03-09T14:22:53.323112+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:54.397 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:54 vm03 bash[17524]: audit 2026-03-09T14:22:53.327677+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:54.397 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:54 vm03 bash[17524]: audit 2026-03-09T14:22:53.328341+0000 mon.a (mon.0) 696 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:22:54.397 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:54 vm03 bash[17524]: audit 2026-03-09T14:22:53.328956+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:22:54.397 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:54 vm03 bash[17524]: audit 2026-03-09T14:22:53.332795+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:54.397 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:54 vm03 bash[17524]: audit 2026-03-09T14:22:53.351673+0000 mon.a (mon.0) 699 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-09T14:22:54.397 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:54 vm03 bash[17524]: audit 2026-03-09T14:22:53.354591+0000 mgr.x (mgr.14150) 258 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-09T14:22:54.397 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:54 vm03 bash[17524]: cephadm 2026-03-09T14:22:53.355305+0000 mgr.x (mgr.14150) 259 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.103:5000 to Dashboard
2026-03-09T14:22:54.397 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:54 vm03 bash[17524]: cephadm 2026-03-09T14:22:53.355337+0000 mgr.x (mgr.14150) 260 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.105:5000 to Dashboard
2026-03-09T14:22:54.397 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:54 vm03 bash[17524]: audit 2026-03-09T14:22:53.355465+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-09T14:22:54.397 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:54 vm03 bash[17524]: audit 2026-03-09T14:22:53.355648+0000 mgr.x (mgr.14150) 261 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-09T14:22:54.397 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:54 vm03 bash[17524]: audit 2026-03-09T14:22:53.359036+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:54.397 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:54 vm03 bash[17524]: audit 2026-03-09T14:22:53.364499+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm03"}]: dispatch
2026-03-09T14:22:54.397 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:54 vm03 bash[17524]: audit 2026-03-09T14:22:53.364824+0000 mgr.x (mgr.14150) 262 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm03"}]: dispatch
2026-03-09T14:22:54.397 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:54 vm03 bash[17524]: audit 2026-03-09T14:22:53.368034+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:54.397 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:54 vm03 bash[17524]: audit 2026-03-09T14:22:53.368890+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch
2026-03-09T14:22:54.397 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:54 vm03 bash[17524]: audit 2026-03-09T14:22:53.369191+0000 mgr.x (mgr.14150) 263 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch
2026-03-09T14:22:54.397 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:54 vm03 bash[17524]: audit 2026-03-09T14:22:53.374345+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:54.398 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:54 vm03 bash[17524]: audit 2026-03-09T14:22:53.375519+0000 mon.a (mon.0) 706 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:22:54.398 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:54 vm03 bash[17524]: audit 2026-03-09T14:22:53.376370+0000 mon.a (mon.0) 707 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:22:54.398 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:54 vm03 bash[17524]: audit 2026-03-09T14:22:53.377007+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:22:54.398 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:54 vm03 bash[17524]: audit 2026-03-09T14:22:53.381339+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:54.398 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:54 vm03 bash[17524]: audit 2026-03-09T14:22:53.474158+0000 mon.a (mon.0) 710 : audit [INF] from='client.? 192.168.123.103:0/2079746259' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6800/623478427"}]: dispatch
2026-03-09T14:22:54.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: cluster 2026-03-09T14:22:52.948094+0000 mgr.x (mgr.14150) 257 : cluster [DBG] pgmap v227: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 255 B/s wr, 3 op/s
2026-03-09T14:22:54.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.133440+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:54.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.137905+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:54.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.272493+0000 mon.a (mon.0) 692 : audit [INF] from='client.? 192.168.123.103:0/4254072755' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6801/2250683817"}]': finished
2026-03-09T14:22:54.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: cluster 2026-03-09T14:22:53.274843+0000 mon.a (mon.0) 693 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in
2026-03-09T14:22:54.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.323112+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:54.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.327677+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.328341+0000 mon.a (mon.0) 696 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.328956+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.332795+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.351673+0000 mon.a (mon.0) 699 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
[DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.354591+0000 mgr.x (mgr.14150) 258 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.354591+0000 mgr.x (mgr.14150) 258 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: cephadm 2026-03-09T14:22:53.355305+0000 mgr.x (mgr.14150) 259 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.103:5000 to Dashboard 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: cephadm 2026-03-09T14:22:53.355305+0000 mgr.x (mgr.14150) 259 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.103:5000 to Dashboard 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: cephadm 2026-03-09T14:22:53.355337+0000 mgr.x (mgr.14150) 260 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.105:5000 to Dashboard 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: cephadm 2026-03-09T14:22:53.355337+0000 mgr.x (mgr.14150) 260 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.105:5000 to Dashboard 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.355465+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.355465+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.355648+0000 mgr.x (mgr.14150) 261 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.355648+0000 mgr.x (mgr.14150) 261 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.359036+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.359036+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.364499+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm03"}]: dispatch 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.364499+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm03"}]: dispatch 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.364824+0000 mgr.x (mgr.14150) 262 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm03"}]: dispatch 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.364824+0000 mgr.x (mgr.14150) 262 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm03"}]: dispatch 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.368034+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.368034+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.368890+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.368890+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.369191+0000 mgr.x (mgr.14150) 263 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.369191+0000 mgr.x (mgr.14150) 263 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.374345+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.374345+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.375519+0000 mon.a (mon.0) 706 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.375519+0000 mon.a (mon.0) 706 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.376370+0000 mon.a (mon.0) 707 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.376370+0000 mon.a (mon.0) 707 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.377007+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.377007+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.381339+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.381339+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.474158+0000 mon.a (mon.0) 710 : audit [INF] from='client.? 192.168.123.103:0/2079746259' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6800/623478427"}]: dispatch 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:54 vm04 bash[19581]: audit 2026-03-09T14:22:53.474158+0000 mon.a (mon.0) 710 : audit [INF] from='client.? 
192.168.123.103:0/2079746259' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6800/623478427"}]: dispatch 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: cluster 2026-03-09T14:22:52.948094+0000 mgr.x (mgr.14150) 257 : cluster [DBG] pgmap v227: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 255 B/s wr, 3 op/s 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: cluster 2026-03-09T14:22:52.948094+0000 mgr.x (mgr.14150) 257 : cluster [DBG] pgmap v227: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 255 B/s wr, 3 op/s 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.133440+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.133440+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.137905+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:22:54.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.137905+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.272493+0000 mon.a (mon.0) 692 : audit [INF] from='client.? 192.168.123.103:0/4254072755' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6801/2250683817"}]': finished 2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.272493+0000 mon.a (mon.0) 692 : audit [INF] from='client.? 
192.168.123.103:0/4254072755' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6801/2250683817"}]': finished 2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: cluster 2026-03-09T14:22:53.274843+0000 mon.a (mon.0) 693 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: cluster 2026-03-09T14:22:53.274843+0000 mon.a (mon.0) 693 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.323112+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.323112+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.327677+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.327677+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.328341+0000 mon.a (mon.0) 696 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.328341+0000 mon.a (mon.0) 696 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.328956+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.328956+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.332795+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.332795+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.351673+0000 mon.a (mon.0) 699 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.351673+0000 mon.a (mon.0) 699 : audit 
[DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.354591+0000 mgr.x (mgr.14150) 258 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: cephadm 2026-03-09T14:22:53.355305+0000 mgr.x (mgr.14150) 259 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.103:5000 to Dashboard
2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: cephadm 2026-03-09T14:22:53.355337+0000 mgr.x (mgr.14150) 260 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.105:5000 to Dashboard
2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.355465+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.355648+0000 mgr.x (mgr.14150) 261 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.359036+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.364499+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm03"}]: dispatch
2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.364824+0000 mgr.x (mgr.14150) 262 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm03"}]: dispatch
2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.368034+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.368890+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch
2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.369191+0000 mgr.x (mgr.14150) 263 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch
2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.374345+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.375519+0000 mon.a (mon.0) 706 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.376370+0000 mon.a (mon.0) 707 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.377007+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.381339+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:22:54.510 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:54 vm05 bash[20070]: audit 2026-03-09T14:22:53.474158+0000 mon.a (mon.0) 710 : audit [INF] from='client.? 192.168.123.103:0/2079746259' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6800/623478427"}]: dispatch
2026-03-09T14:22:54.803 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:54 vm03 bash[37744]: debug Successfully removed blocklist entry
2026-03-09T14:22:54.803 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:54 vm03 bash[37744]: debug Removing blocklisted entry for this host : 192.168.123.103:0/205210706
2026-03-09T14:22:55.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:55 vm04 bash[19581]: audit 2026-03-09T14:22:54.384787+0000 mon.a (mon.0) 711 : audit [INF] from='client.? 192.168.123.103:0/2079746259' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6800/623478427"}]': finished
2026-03-09T14:22:55.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:55 vm04 bash[19581]: cluster 2026-03-09T14:22:54.389506+0000 mon.a (mon.0) 712 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in
2026-03-09T14:22:55.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:55 vm04 bash[19581]: audit 2026-03-09T14:22:54.550260+0000 mon.a (mon.0) 713 : audit [INF] from='client.? 192.168.123.103:0/3024003061' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/205210706"}]: dispatch
2026-03-09T14:22:55.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:55 vm05 bash[20070]: audit 2026-03-09T14:22:54.384787+0000 mon.a (mon.0) 711 : audit [INF] from='client.? 192.168.123.103:0/2079746259' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6800/623478427"}]': finished
2026-03-09T14:22:55.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:55 vm05 bash[20070]: cluster 2026-03-09T14:22:54.389506+0000 mon.a (mon.0) 712 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in
2026-03-09T14:22:55.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:55 vm05 bash[20070]: audit 2026-03-09T14:22:54.550260+0000 mon.a (mon.0) 713 : audit [INF] from='client.? 192.168.123.103:0/3024003061' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/205210706"}]: dispatch
2026-03-09T14:22:55.803 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:55 vm03 bash[37744]: debug Successfully removed blocklist entry
2026-03-09T14:22:55.804 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:55 vm03 bash[37744]: debug Removing blocklisted entry for this host : 192.168.123.103:6800/2250683817
2026-03-09T14:22:55.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:55 vm03 bash[17524]: audit 2026-03-09T14:22:54.384787+0000 mon.a (mon.0) 711 : audit [INF] from='client.? 192.168.123.103:0/2079746259' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6800/623478427"}]': finished
2026-03-09T14:22:55.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:55 vm03 bash[17524]: cluster 2026-03-09T14:22:54.389506+0000 mon.a (mon.0) 712 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in
2026-03-09T14:22:55.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:55 vm03 bash[17524]: audit 2026-03-09T14:22:54.550260+0000 mon.a (mon.0) 713 : audit [INF] from='client.? 192.168.123.103:0/3024003061' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/205210706"}]: dispatch
2026-03-09T14:22:56.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:56 vm04 bash[19581]: cluster 2026-03-09T14:22:54.948319+0000 mgr.x (mgr.14150) 264 : cluster [DBG] pgmap v230: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:22:56.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:56 vm04 bash[19581]: audit 2026-03-09T14:22:55.399881+0000 mon.a (mon.0) 714 : audit [INF] from='client.? 192.168.123.103:0/3024003061' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/205210706"}]': finished
2026-03-09T14:22:56.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:56 vm04 bash[19581]: cluster 2026-03-09T14:22:55.403594+0000 mon.a (mon.0) 715 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in
2026-03-09T14:22:56.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:56 vm04 bash[19581]: audit 2026-03-09T14:22:55.564651+0000 mon.a (mon.0) 716 : audit [INF] from='client.? 192.168.123.103:0/2422172736' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6800/2250683817"}]: dispatch
2026-03-09T14:22:56.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:56 vm05 bash[20070]: cluster 2026-03-09T14:22:54.948319+0000 mgr.x (mgr.14150) 264 : cluster [DBG] pgmap v230: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:22:56.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:56 vm05 bash[20070]: audit 2026-03-09T14:22:55.399881+0000 mon.a (mon.0) 714 : audit [INF] from='client.? 192.168.123.103:0/3024003061' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/205210706"}]': finished
2026-03-09T14:22:56.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:56 vm05 bash[20070]: cluster 2026-03-09T14:22:55.403594+0000 mon.a (mon.0) 715 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in
2026-03-09T14:22:56.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:56 vm05 bash[20070]: audit 2026-03-09T14:22:55.564651+0000 mon.a (mon.0) 716 : audit [INF] from='client.? 192.168.123.103:0/2422172736' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6800/2250683817"}]: dispatch
2026-03-09T14:22:56.803 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:56 vm03 bash[37744]: debug Successfully removed blocklist entry
2026-03-09T14:22:56.804 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:56 vm03 bash[37744]: debug Removing blocklisted entry for this host : 192.168.123.103:0/1825529667
2026-03-09T14:22:56.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:56 vm03 bash[17524]: cluster 2026-03-09T14:22:54.948319+0000 mgr.x (mgr.14150) 264 : cluster [DBG] pgmap v230: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:22:56.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:56 vm03 bash[17524]: audit 2026-03-09T14:22:55.399881+0000 mon.a (mon.0) 714 : audit [INF] from='client.? 192.168.123.103:0/3024003061' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/205210706"}]': finished
2026-03-09T14:22:56.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:56 vm03 bash[17524]: cluster 2026-03-09T14:22:55.403594+0000 mon.a (mon.0) 715 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in
2026-03-09T14:22:56.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:56 vm03 bash[17524]: audit 2026-03-09T14:22:55.564651+0000 mon.a (mon.0) 716 : audit [INF] from='client.? 192.168.123.103:0/2422172736' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6800/2250683817"}]: dispatch
2026-03-09T14:22:56.861 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.b/config
2026-03-09T14:22:57.154 INFO:teuthology.orchestra.run.vm04.stdout:[client.1]
2026-03-09T14:22:57.154 INFO:teuthology.orchestra.run.vm04.stdout: key = AQDB165pA4HqCBAALqInmi+H8PGowy4rB5/PlQ==
2026-03-09T14:22:57.209 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-09T14:22:57.209 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/ceph/ceph.client.1.keyring
2026-03-09T14:22:57.209 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod 0644 /etc/ceph/ceph.client.1.keyring
2026-03-09T14:22:57.262 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph auth get-or-create client.2 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'
2026-03-09T14:22:57.423 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:57 vm04 bash[19581]: audit 2026-03-09T14:22:56.409271+0000 mon.a (mon.0) 717 : audit [INF] from='client.? 192.168.123.103:0/2422172736' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6800/2250683817"}]': finished
2026-03-09T14:22:57.423 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:57 vm04 bash[19581]: cluster 2026-03-09T14:22:56.410819+0000 mon.a (mon.0) 718 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in
2026-03-09T14:22:57.423 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:57 vm04 bash[19581]: audit 2026-03-09T14:22:56.579124+0000 mon.c (mon.1) 22 : audit [INF] from='client.? 192.168.123.103:0/2034061116' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/1825529667"}]: dispatch
2026-03-09T14:22:57.423 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:57 vm04 bash[19581]: audit 2026-03-09T14:22:56.579623+0000 mon.a (mon.0) 719 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/1825529667"}]: dispatch
2026-03-09T14:22:57.423 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:57 vm04 bash[19581]: audit 2026-03-09T14:22:57.148595+0000 mon.b (mon.2) 19 : audit [INF] from='client.? 192.168.123.104:0/618892123' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-09T14:22:57.423 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:57 vm04 bash[19581]: audit 2026-03-09T14:22:57.149463+0000 mon.a (mon.0) 720 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-09T14:22:57.423 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:57 vm04 bash[19581]: audit 2026-03-09T14:22:57.152073+0000 mon.a (mon.0) 721 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-09T14:22:57.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:57 vm05 bash[20070]: audit 2026-03-09T14:22:56.409271+0000 mon.a (mon.0) 717 : audit [INF] from='client.? 192.168.123.103:0/2422172736' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6800/2250683817"}]': finished
2026-03-09T14:22:57.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:57 vm05 bash[20070]: cluster 2026-03-09T14:22:56.410819+0000 mon.a (mon.0) 718 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in
2026-03-09T14:22:57.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:57 vm05 bash[20070]: audit 2026-03-09T14:22:56.579124+0000 mon.c (mon.1) 22 : audit [INF] from='client.? 192.168.123.103:0/2034061116' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/1825529667"}]: dispatch
2026-03-09T14:22:57.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:57 vm05 bash[20070]: audit 2026-03-09T14:22:56.579623+0000 mon.a (mon.0) 719 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/1825529667"}]: dispatch
2026-03-09T14:22:57.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:57 vm05 bash[20070]: audit 2026-03-09T14:22:57.148595+0000 mon.b (mon.2) 19 : audit [INF] from='client.? 192.168.123.104:0/618892123' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-09T14:22:57.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:57 vm05 bash[20070]: audit 2026-03-09T14:22:57.149463+0000 mon.a (mon.0) 720 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-09T14:22:57.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:57 vm05 bash[20070]: audit 2026-03-09T14:22:57.152073+0000 mon.a (mon.0) 721 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-09T14:22:57.803 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:57 vm03 bash[37744]: debug Successfully removed blocklist entry
2026-03-09T14:22:57.803 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:57 vm03 bash[37744]: debug Reading the configuration object to update local LIO configuration
2026-03-09T14:22:57.803 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:57 vm03 bash[37744]: debug Configuration does not have an entry for this host(vm03.local) - nothing to define to LIO
2026-03-09T14:22:57.803 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:57 vm03 bash[37744]: * Serving Flask app 'rbd-target-api' (lazy loading)
2026-03-09T14:22:57.803 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:57 vm03 bash[37744]: * Environment: production
2026-03-09T14:22:57.803 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:57 vm03 bash[37744]: WARNING: This is a development server. Do not use it in a production deployment.
2026-03-09T14:22:57.803 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:57 vm03 bash[37744]: Use a production WSGI server instead.
2026-03-09T14:22:57.803 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:57 vm03 bash[37744]: * Debug mode: off
2026-03-09T14:22:57.804 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:57 vm03 bash[37744]: debug * Running on all addresses.
2026-03-09T14:22:57.804 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:57 vm03 bash[37744]: WARNING: This is a development server. Do not use it in a production deployment.
2026-03-09T14:22:57.804 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:57 vm03 bash[37744]: * Running on all addresses.
2026-03-09T14:22:57.804 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:57 vm03 bash[37744]: WARNING: This is a development server. Do not use it in a production deployment.
2026-03-09T14:22:57.804 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:57 vm03 bash[37744]: debug * Running on http://[::1]:5000/ (Press CTRL+C to quit)
2026-03-09T14:22:57.804 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:57 vm03 bash[37744]: * Running on http://[::1]:5000/ (Press CTRL+C to quit)
2026-03-09T14:22:57.804 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:22:57 vm03 bash[37744]: debug there is no tcmu-runner data available
2026-03-09T14:22:57.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:57 vm03 bash[17524]: audit 2026-03-09T14:22:56.409271+0000 mon.a (mon.0) 717 : audit [INF] from='client.? 192.168.123.103:0/2422172736' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:6800/2250683817"}]': finished
2026-03-09T14:22:57.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:57 vm03 bash[17524]: cluster 2026-03-09T14:22:56.410819+0000 mon.a (mon.0) 718 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in
2026-03-09T14:22:57.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:57 vm03 bash[17524]: audit 2026-03-09T14:22:56.579124+0000 mon.c (mon.1) 22 : audit [INF] from='client.? 192.168.123.103:0/2034061116' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/1825529667"}]: dispatch
2026-03-09T14:22:57.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:57 vm03 bash[17524]: audit 2026-03-09T14:22:56.579623+0000 mon.a (mon.0) 719 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/1825529667"}]: dispatch
2026-03-09T14:22:57.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:57 vm03 bash[17524]: audit 2026-03-09T14:22:57.148595+0000 mon.b (mon.2) 19 : audit [INF] from='client.? 192.168.123.104:0/618892123' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-09T14:22:57.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:57 vm03 bash[17524]: audit 2026-03-09T14:22:57.149463+0000 mon.a (mon.0) 720 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-09T14:22:57.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:57 vm03 bash[17524]: audit 2026-03-09T14:22:57.152073+0000 mon.a (mon.0) 721 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-09T14:22:58.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:58 vm04 bash[19581]: cluster 2026-03-09T14:22:56.948600+0000 mgr.x (mgr.14150) 265 : cluster [DBG] pgmap v233: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:22:58.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:58 vm04 bash[19581]: audit 2026-03-09T14:22:57.417887+0000 mon.a (mon.0) 722 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/1825529667"}]': finished
2026-03-09T14:22:58.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:58 vm04 bash[19581]: cluster 2026-03-09T14:22:57.423690+0000 mon.a (mon.0) 723 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in
2026-03-09T14:22:58.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:58 vm03 bash[17524]: cluster 2026-03-09T14:22:56.948600+0000 mgr.x (mgr.14150) 265 : cluster [DBG] pgmap v233: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:22:58.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:58 vm03 bash[17524]: audit 2026-03-09T14:22:57.417887+0000 mon.a (mon.0) 722 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/1825529667"}]': finished
2026-03-09T14:22:58.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:58 vm03 bash[17524]: cluster 2026-03-09T14:22:57.423690+0000 mon.a (mon.0) 723 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in
2026-03-09T14:22:59.008 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:22:58 vm05 bash[38699]: debug there is no tcmu-runner data available
2026-03-09T14:22:59.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:58 vm05 bash[20070]: cluster 2026-03-09T14:22:56.948600+0000 mgr.x (mgr.14150) 265 : cluster [DBG] pgmap v233: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:22:59.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:58 vm05 bash[20070]: audit 2026-03-09T14:22:57.417887+0000 mon.a (mon.0) 722 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.103:0/1825529667"}]': finished
2026-03-09T14:22:59.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:58 vm05 bash[20070]: cluster 2026-03-09T14:22:57.423690+0000 mon.a (mon.0) 723 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in
2026-03-09T14:22:59.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:22:59 vm04 bash[19581]: audit 2026-03-09T14:22:57.637704+0000 mgr.x (mgr.14150) 266 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:22:59.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:22:59 vm05 bash[20070]: audit 2026-03-09T14:22:57.637704+0000 mgr.x (mgr.14150) 266 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:22:59.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:22:59 vm03 bash[17524]: audit 2026-03-09T14:22:57.637704+0000 mgr.x (mgr.14150) 266 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:23:00.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:00 vm04 bash[19581]: audit 2026-03-09T14:22:58.615949+0000 mgr.x (mgr.14150) 267 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:23:00.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:00 vm04 bash[19581]: cluster 2026-03-09T14:22:58.948872+0000 mgr.x (mgr.14150) 268 : cluster [DBG] pgmap v235: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:23:00.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:00 vm05 bash[20070]: audit 2026-03-09T14:22:58.615949+0000 mgr.x (mgr.14150) 267 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:23:00.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:00 vm05 bash[20070]: cluster 2026-03-09T14:22:58.948872+0000 mgr.x (mgr.14150) 268 : cluster [DBG] pgmap v235: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:23:00.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:00 vm03 bash[17524]: audit 2026-03-09T14:22:58.615949+0000 mgr.x (mgr.14150) 267 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:23:00.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:00 vm03 bash[17524]: cluster 2026-03-09T14:22:58.948872+0000 mgr.x (mgr.14150) 268 : cluster [DBG] pgmap v235: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:23:01.885 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.c/config
2026-03-09T14:23:02.188 INFO:teuthology.orchestra.run.vm05.stdout:[client.2]
2026-03-09T14:23:02.188 INFO:teuthology.orchestra.run.vm05.stdout: key = AQDG165pXyDiChAAOcjtv5zZC5fHOmfxe15BXA==
2026-03-09T14:23:02.241 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-09T14:23:02.241 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/ceph/ceph.client.2.keyring
2026-03-09T14:23:02.241 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod 0644 /etc/ceph/ceph.client.2.keyring
2026-03-09T14:23:02.252 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean...
2026-03-09T14:23:02.252 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available
2026-03-09T14:23:02.252 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph mgr dump --format=json
2026-03-09T14:23:02.513 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:02 vm03 bash[17524]: cluster 2026-03-09T14:23:00.949173+0000 mgr.x (mgr.14150) 269 : cluster [DBG] pgmap v236: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:23:02.513 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:02 vm03 bash[17524]: audit 2026-03-09T14:23:02.181703+0000 mon.b (mon.2) 20 : audit [INF] from='client.? 192.168.123.105:0/1973326153' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.2", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-09T14:23:02.513 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:02 vm03 bash[17524]: audit 2026-03-09T14:23:02.182495+0000 mon.a (mon.0) 724 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.2", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-09T14:23:02.513 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:02 vm03 bash[17524]: audit 2026-03-09T14:23:02.185168+0000 mon.a (mon.0) 725 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.2", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-09T14:23:02.897 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:02 vm05 bash[20070]: cluster 2026-03-09T14:23:00.949173+0000 mgr.x (mgr.14150) 269 : cluster [DBG] pgmap v236: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:23:02.897 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:02 vm05 bash[20070]: audit 2026-03-09T14:23:02.181703+0000 mon.b (mon.2) 20 : audit [INF] from='client.? 192.168.123.105:0/1973326153' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.2", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-09T14:23:02.897 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:02 vm05 bash[20070]: audit 2026-03-09T14:23:02.182495+0000 mon.a (mon.0) 724 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.2", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-09T14:23:02.897 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:02 vm05 bash[20070]: audit 2026-03-09T14:23:02.185168+0000 mon.a (mon.0) 725 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.2", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-09T14:23:03.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:02 vm04 bash[19581]: cluster 2026-03-09T14:23:00.949173+0000 mgr.x (mgr.14150) 269 : cluster [DBG] pgmap v236: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:23:03.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:02 vm04 bash[19581]: audit 2026-03-09T14:23:02.181703+0000 mon.b (mon.2) 20 : audit [INF] from='client.? 192.168.123.105:0/1973326153' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.2", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-09T14:23:03.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:02 vm04 bash[19581]: audit 2026-03-09T14:23:02.182495+0000 mon.a (mon.0) 724 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.2", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-09T14:23:03.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:02 vm04 bash[19581]: audit 2026-03-09T14:23:02.185168+0000 mon.a (mon.0) 725 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.2", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-09T14:23:04.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:04 vm03 bash[17524]: cluster 2026-03-09T14:23:02.949459+0000 mgr.x (mgr.14150) 270 : cluster [DBG] pgmap v237: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s
2026-03-09T14:23:05.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:04 vm04 bash[19581]: cluster 2026-03-09T14:23:02.949459+0000 mgr.x (mgr.14150) 270 : cluster [DBG] pgmap v237: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s
2026-03-09T14:23:05.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:04 vm05 bash[20070]: cluster 2026-03-09T14:23:02.949459+0000 mgr.x (mgr.14150) 270 : cluster [DBG] pgmap v237: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s
2026-03-09T14:23:06.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:06 vm03 bash[17524]: cluster 2026-03-09T14:23:04.949819+0000 mgr.x (mgr.14150) 271 : cluster [DBG] pgmap v238: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.3 KiB/s rd, 2 op/s
2026-03-09T14:23:06.872 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config
2026-03-09T14:23:07.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:06 vm04 bash[19581]: cluster 2026-03-09T14:23:04.949819+0000 mgr.x (mgr.14150) 271 : cluster [DBG] pgmap v238: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.3 KiB/s rd, 2 op/s
2026-03-09T14:23:07.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:06 vm05 bash[20070]: cluster 2026-03-09T14:23:04.949819+0000 mgr.x (mgr.14150) 271 : cluster [DBG] pgmap v238: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.3 KiB/s rd, 2 op/s
2026-03-09T14:23:04.949819+0000 mgr.x (mgr.14150) 271 : cluster [DBG] pgmap v238: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.3 KiB/s rd, 2 op/s 2026-03-09T14:23:07.141 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:23:07.199 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":14,"flags":0,"active_gid":14150,"active_name":"x","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6800","nonce":1749654064},{"type":"v1","addr":"192.168.123.103:6801","nonce":1749654064}]},"active_addr":"192.168.123.103:6801/1749654064","active_change":"2026-03-09T14:16:58.893053+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[],"modules":["cephadm","dashboard","iostat","nfs","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate 
as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send 
metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with 
`--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0
,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"def
ault_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":
"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[
]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. 
This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"st
r","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":""
,"long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_a
llowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"adv
anced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.103:8443/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":3,"active_clients":[{"name":"cephadm","addrvec":[{"type":"v2","addr":"192.168.123.103:0","nonce":4133158150}]},{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.103:0","nonce":3033634307}]},{"name":"libcephsqlite","addrvec":[{"type":"v2","addr"
:"192.168.123.103:0","nonce":308481536}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.103:0","nonce":432547363}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.103:0","nonce":2723734801}]}]} 2026-03-09T14:23:07.200 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-09T14:23:07.200 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-09T14:23:07.200 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph osd dump --format=json 2026-03-09T14:23:07.804 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:23:07 vm03 bash[37744]: debug there is no tcmu-runner data available 2026-03-09T14:23:07.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:07 vm03 bash[17524]: audit 2026-03-09T14:23:07.140109+0000 mon.c (mon.1) 23 : audit [DBG] from='client.? 192.168.123.103:0/3590167992' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T14:23:07.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:07 vm03 bash[17524]: audit 2026-03-09T14:23:07.140109+0000 mon.c (mon.1) 23 : audit [DBG] from='client.? 192.168.123.103:0/3590167992' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T14:23:08.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:07 vm04 bash[19581]: audit 2026-03-09T14:23:07.140109+0000 mon.c (mon.1) 23 : audit [DBG] from='client.? 192.168.123.103:0/3590167992' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T14:23:08.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:07 vm04 bash[19581]: audit 2026-03-09T14:23:07.140109+0000 mon.c (mon.1) 23 : audit [DBG] from='client.? 192.168.123.103:0/3590167992' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T14:23:08.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:07 vm05 bash[20070]: audit 2026-03-09T14:23:07.140109+0000 mon.c (mon.1) 23 : audit [DBG] from='client.? 192.168.123.103:0/3590167992' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T14:23:08.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:07 vm05 bash[20070]: audit 2026-03-09T14:23:07.140109+0000 mon.c (mon.1) 23 : audit [DBG] from='client.? 
192.168.123.103:0/3590167992' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T14:23:08.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:08 vm03 bash[17524]: cluster 2026-03-09T14:23:06.950128+0000 mgr.x (mgr.14150) 272 : cluster [DBG] pgmap v239: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.0 KiB/s rd, 1 op/s 2026-03-09T14:23:08.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:08 vm03 bash[17524]: cluster 2026-03-09T14:23:06.950128+0000 mgr.x (mgr.14150) 272 : cluster [DBG] pgmap v239: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.0 KiB/s rd, 1 op/s 2026-03-09T14:23:09.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:08 vm04 bash[19581]: cluster 2026-03-09T14:23:06.950128+0000 mgr.x (mgr.14150) 272 : cluster [DBG] pgmap v239: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.0 KiB/s rd, 1 op/s 2026-03-09T14:23:09.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:08 vm04 bash[19581]: cluster 2026-03-09T14:23:06.950128+0000 mgr.x (mgr.14150) 272 : cluster [DBG] pgmap v239: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.0 KiB/s rd, 1 op/s 2026-03-09T14:23:09.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:08 vm05 bash[20070]: cluster 2026-03-09T14:23:06.950128+0000 mgr.x (mgr.14150) 272 : cluster [DBG] pgmap v239: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.0 KiB/s rd, 1 op/s 2026-03-09T14:23:09.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:08 vm05 bash[20070]: cluster 2026-03-09T14:23:06.950128+0000 mgr.x (mgr.14150) 272 : cluster [DBG] pgmap v239: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.0 KiB/s rd, 1 op/s 2026-03-09T14:23:09.008 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:23:08 vm05 bash[38699]: debug there is no tcmu-runner data available 2026-03-09T14:23:10.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:09 vm04 bash[19581]: audit 2026-03-09T14:23:07.645740+0000 mgr.x (mgr.14150) 273 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:10.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:09 vm04 bash[19581]: audit 2026-03-09T14:23:07.645740+0000 mgr.x (mgr.14150) 273 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:10.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:09 vm05 bash[20070]: audit 2026-03-09T14:23:07.645740+0000 mgr.x (mgr.14150) 273 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:10.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:09 vm05 bash[20070]: audit 2026-03-09T14:23:07.645740+0000 mgr.x (mgr.14150) 273 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:10.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:09 vm03 bash[17524]: audit 2026-03-09T14:23:07.645740+0000 mgr.x (mgr.14150) 273 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:10.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:09 vm03 bash[17524]: audit 2026-03-09T14:23:07.645740+0000 mgr.x (mgr.14150) 273 : 
audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:10.890 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config 2026-03-09T14:23:10.905 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:10 vm03 bash[17524]: audit 2026-03-09T14:23:08.619213+0000 mgr.x (mgr.14150) 274 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:10.905 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:10 vm03 bash[17524]: audit 2026-03-09T14:23:08.619213+0000 mgr.x (mgr.14150) 274 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:10.905 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:10 vm03 bash[17524]: cluster 2026-03-09T14:23:08.950379+0000 mgr.x (mgr.14150) 275 : cluster [DBG] pgmap v240: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:10.905 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:10 vm03 bash[17524]: cluster 2026-03-09T14:23:08.950379+0000 mgr.x (mgr.14150) 275 : cluster [DBG] pgmap v240: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:11.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:10 vm04 bash[19581]: audit 2026-03-09T14:23:08.619213+0000 mgr.x (mgr.14150) 274 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:11.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:10 vm04 bash[19581]: audit 2026-03-09T14:23:08.619213+0000 mgr.x (mgr.14150) 274 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:11.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:10 vm04 bash[19581]: cluster 2026-03-09T14:23:08.950379+0000 mgr.x (mgr.14150) 275 : cluster [DBG] pgmap v240: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:11.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:10 vm04 bash[19581]: cluster 2026-03-09T14:23:08.950379+0000 mgr.x (mgr.14150) 275 : cluster [DBG] pgmap v240: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:11.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:10 vm05 bash[20070]: audit 2026-03-09T14:23:08.619213+0000 mgr.x (mgr.14150) 274 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:11.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:10 vm05 bash[20070]: audit 2026-03-09T14:23:08.619213+0000 mgr.x (mgr.14150) 274 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:11.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:10 vm05 bash[20070]: cluster 2026-03-09T14:23:08.950379+0000 mgr.x (mgr.14150) 275 : cluster [DBG] pgmap v240: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:11.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:10 vm05 bash[20070]: cluster 2026-03-09T14:23:08.950379+0000 mgr.x 
(mgr.14150) 275 : cluster [DBG] pgmap v240: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:11.140 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:23:11.140 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":65,"fsid":"3346de4a-1bc2-11f1-95ae-3796c8433614","created":"2026-03-09T14:16:38.209278+0000","modified":"2026-03-09T14:22:57.410554+0000","last_up_change":"2026-03-09T14:22:14.299826+0000","last_in_change":"2026-03-09T14:21:57.783293+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":2,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T14:19:36.973181+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":"datapool","create_time":"2026-03-09T14:22:33.077864+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":3,"pg_placement_num":3,"pg_placement_num_target":3,"pg_num_target":3,"pg_num_pending":3,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"55","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":55,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":5.3299999237060547,"score_stable":5.3299999237060547,"optimal_score":0.75,"raw_score_acting":4,"raw_score_stable":4,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"6f17c91b-de65-4e8c-9e74-a512b4d9d1c9","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":25,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6802","nonce":1075788976},{"type":"v1","addr":"192.168.123.103:6803","nonce":1075788976}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6804","nonce":1075788976},{"type":"v1","addr":"192.168.123.103:6805","nonce":1075788976}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6808","nonce":1075788976},{"type":"v1","addr":"192.168.123.103:6809","nonce":1075788976}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6806","nonce":1075788976},{"type":"v1","addr":"192.168.123.103:6807","nonce":1075788976}]},"public_addr":"192.168.123.103:6803/1075788976","cluster_addr":"192.168.123.103:6805/1075788976","heartbeat_back_addr":"192.168.123.103:6809/1075788976","heartbeat_front_addr":"192.168.123.103:6807/1075788976","state":["exists","up"]},{"osd":1,"uuid":"0ee8add4-d132-4666-b7ad-a8416c3c05bf","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6810","nonce":2015646488},{"type":"v1","addr":"192.168.123.103:6811","nonce":2015646488}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6812","nonce":2015646488},{"type":"v1","addr":"192.168.123.103:6813","nonce":2015646488}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6816","nonce":2015646488},{"type":"v1","addr":"192.168.123.103:6817","nonce":2015646488}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6814","nonce":2015646488},{"type":"v1","addr":"192.168.123.103:6815","nonce":2015646488}]},"public_addr":"192.168.123.103:6811/2015646488","cluster_addr":"192.168.123.103:6813/2015646488","heartbeat_back_addr":"192.168.123.103:6817/2015646488","heartbeat_front_addr":"192.168.123.103:6815/2015646488","state":["exists","up"]},{"osd":2,"uuid":"f76cddf6-4356-443b-8d69-5d0e6d8a3803","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":50,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6800","nonce":1899064825},{"type":"v1","addr":"192.168.123.104:6801","nonce":1899064825}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6802","nonce":1899064825},{"type":"v1","addr":"192.168.123.104:6803","nonce":1899064825}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6806","nonce":1899064825},{"type":"v1","addr":"192.168.123.104:6807","nonce":1899064825}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6804","nonce":1899064825},{"type":"v1","addr":"192.168.123.104:6805","nonce":1899064825}]},"public_addr":"192.168.123.104:6801/1899064825","cluster_addr":"192.168.123.104:6803/1899064825","heartbeat_back_addr":"192.168.123.104:6807/1899064825","heartbeat_front_addr":"192.168.123.104:6805/1899064825","state":["exists","up"]},{"osd":3,"uuid":"d1d9774a-a921-4ff4-9d67-c8545864b268","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean
_begin":0,"last_clean_end":0,"up_from":25,"up_thru":50,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6808","nonce":1600567220},{"type":"v1","addr":"192.168.123.104:6809","nonce":1600567220}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6810","nonce":1600567220},{"type":"v1","addr":"192.168.123.104:6811","nonce":1600567220}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6814","nonce":1600567220},{"type":"v1","addr":"192.168.123.104:6815","nonce":1600567220}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6812","nonce":1600567220},{"type":"v1","addr":"192.168.123.104:6813","nonce":1600567220}]},"public_addr":"192.168.123.104:6809/1600567220","cluster_addr":"192.168.123.104:6811/1600567220","heartbeat_back_addr":"192.168.123.104:6815/1600567220","heartbeat_front_addr":"192.168.123.104:6813/1600567220","state":["exists","up"]},{"osd":4,"uuid":"97a3c763-32a2-413f-8d3f-0e7163f512ed","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":31,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6816","nonce":3814952582},{"type":"v1","addr":"192.168.123.104:6817","nonce":3814952582}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6818","nonce":3814952582},{"type":"v1","addr":"192.168.123.104:6819","nonce":3814952582}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6822","nonce":3814952582},{"type":"v1","addr":"192.168.123.104:6823","nonce":3814952582}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6820","nonce":3814952582},{"type":"v1","addr":"192.168.123.104:6821","nonce":3814952582}]},"public_addr":"192.168.123.104:6817/3814952582","cluster_addr":"192.168.123.104:6819/3814952582","heartbeat_back_addr":"192.168.123.104:6823/3814952582","heartbeat_front_addr":"192.168.123.104:6821/3814952582","state":["exists","up"]},{"osd":5,"uuid":"628905a2-37b8-4495-89ad-022957204832","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":37,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6800","nonce":67591369},{"type":"v1","addr":"192.168.123.105:6801","nonce":67591369}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6802","nonce":67591369},{"type":"v1","addr":"192.168.123.105:6803","nonce":67591369}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6806","nonce":67591369},{"type":"v1","addr":"192.168.123.105:6807","nonce":67591369}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6804","nonce":67591369},{"type":"v1","addr":"192.168.123.105:6805","nonce":67591369}]},"public_addr":"192.168.123.105:6801/67591369","cluster_addr":"192.168.123.105:6803/67591369","heartbeat_back_addr":"192.168.123.105:6807/67591369","heartbeat_front_addr":"192.168.123.105:6805/67591369","state":["exists","up"]},{"osd":6,"uuid":"bf677cce-a472-46ab-9a91-492f3b2e689b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":43,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6808","nonce":1573740685},{"type":"v1","addr":"192.168.123.105:6809","nonce":1573740685}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6810","nonce":1573740685},{"type":"v1","addr":"192.168.123.105:6811","nonce":1573740685}]},
"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6814","nonce":1573740685},{"type":"v1","addr":"192.168.123.105:6815","nonce":1573740685}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6812","nonce":1573740685},{"type":"v1","addr":"192.168.123.105:6813","nonce":1573740685}]},"public_addr":"192.168.123.105:6809/1573740685","cluster_addr":"192.168.123.105:6811/1573740685","heartbeat_back_addr":"192.168.123.105:6815/1573740685","heartbeat_front_addr":"192.168.123.105:6813/1573740685","state":["exists","up"]},{"osd":7,"uuid":"377ff461-7194-48e3-8093-29ef296bd4de","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":48,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6816","nonce":3385956239},{"type":"v1","addr":"192.168.123.105:6817","nonce":3385956239}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6818","nonce":3385956239},{"type":"v1","addr":"192.168.123.105:6819","nonce":3385956239}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6822","nonce":3385956239},{"type":"v1","addr":"192.168.123.105:6823","nonce":3385956239}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6820","nonce":3385956239},{"type":"v1","addr":"192.168.123.105:6821","nonce":3385956239}]},"public_addr":"192.168.123.105:6817/3385956239","cluster_addr":"192.168.123.105:6819/3385956239","heartbeat_back_addr":"192.168.123.105:6823/3385956239","heartbeat_front_addr":"192.168.123.105:6821/3385956239","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:18:31.521402+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:19:04.025881+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:19:34.043331+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:20:06.342628+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:20:38.397649+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:21:08.079874+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:21:40.198080+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:22:12.612686+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},
"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T14:23:11.202 INFO:tasks.cephadm.ceph_manager.ceph:all up! 2026-03-09T14:23:11.202 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph osd dump --format=json 2026-03-09T14:23:11.903 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:11 vm03 bash[17524]: cluster 2026-03-09T14:23:10.950647+0000 mgr.x (mgr.14150) 276 : cluster [DBG] pgmap v241: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:23:11.903 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:11 vm03 bash[17524]: cluster 2026-03-09T14:23:10.950647+0000 mgr.x (mgr.14150) 276 : cluster [DBG] pgmap v241: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:23:11.903 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:11 vm03 bash[17524]: audit 2026-03-09T14:23:11.139719+0000 mon.a (mon.0) 726 : audit [DBG] from='client.? 192.168.123.103:0/781692820' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:23:11.903 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:11 vm03 bash[17524]: audit 2026-03-09T14:23:11.139719+0000 mon.a (mon.0) 726 : audit [DBG] from='client.? 192.168.123.103:0/781692820' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:23:12.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:11 vm04 bash[19581]: cluster 2026-03-09T14:23:10.950647+0000 mgr.x (mgr.14150) 276 : cluster [DBG] pgmap v241: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:23:12.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:11 vm04 bash[19581]: cluster 2026-03-09T14:23:10.950647+0000 mgr.x (mgr.14150) 276 : cluster [DBG] pgmap v241: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:23:12.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:11 vm04 bash[19581]: audit 2026-03-09T14:23:11.139719+0000 mon.a (mon.0) 726 : audit [DBG] from='client.? 192.168.123.103:0/781692820' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:23:12.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:11 vm04 bash[19581]: audit 2026-03-09T14:23:11.139719+0000 mon.a (mon.0) 726 : audit [DBG] from='client.? 192.168.123.103:0/781692820' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:23:12.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:11 vm05 bash[20070]: cluster 2026-03-09T14:23:10.950647+0000 mgr.x (mgr.14150) 276 : cluster [DBG] pgmap v241: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:23:12.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:11 vm05 bash[20070]: cluster 2026-03-09T14:23:10.950647+0000 mgr.x (mgr.14150) 276 : cluster [DBG] pgmap v241: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:23:12.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:11 vm05 bash[20070]: audit 2026-03-09T14:23:11.139719+0000 mon.a (mon.0) 726 : audit [DBG] from='client.? 
192.168.123.103:0/781692820' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:23:12.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:11 vm05 bash[20070]: audit 2026-03-09T14:23:11.139719+0000 mon.a (mon.0) 726 : audit [DBG] from='client.? 192.168.123.103:0/781692820' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:23:14.258 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:14 vm04 bash[19581]: cluster 2026-03-09T14:23:12.950917+0000 mgr.x (mgr.14150) 277 : cluster [DBG] pgmap v242: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:14.258 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:14 vm04 bash[19581]: cluster 2026-03-09T14:23:12.950917+0000 mgr.x (mgr.14150) 277 : cluster [DBG] pgmap v242: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:14.258 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:14 vm05 bash[20070]: cluster 2026-03-09T14:23:12.950917+0000 mgr.x (mgr.14150) 277 : cluster [DBG] pgmap v242: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:14.258 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:14 vm05 bash[20070]: cluster 2026-03-09T14:23:12.950917+0000 mgr.x (mgr.14150) 277 : cluster [DBG] pgmap v242: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:14.303 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:14 vm03 bash[17524]: cluster 2026-03-09T14:23:12.950917+0000 mgr.x (mgr.14150) 277 : cluster [DBG] pgmap v242: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:14.303 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:14 vm03 bash[17524]: cluster 2026-03-09T14:23:12.950917+0000 mgr.x (mgr.14150) 277 : cluster [DBG] pgmap v242: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:14.903 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config 2026-03-09T14:23:15.167 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:23:15.167 
INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":65,"fsid":"3346de4a-1bc2-11f1-95ae-3796c8433614","created":"2026-03-09T14:16:38.209278+0000","modified":"2026-03-09T14:22:57.410554+0000","last_up_change":"2026-03-09T14:22:14.299826+0000","last_in_change":"2026-03-09T14:21:57.783293+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":2,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T14:19:36.973181+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":"datapool","create_time":"2026-03-09T14:22:33.077864+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":3,"pg_placement_num":3,"pg_placement_num_target":3,"pg_num_target":3,"pg_num_pending":3,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"55","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":55,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":5.3299999237060547,"score_stable":5.3299999237060547,"optimal_score":0.75,"raw_score_acting":4,"raw_score_stable":4,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"6f17c91b-de65-4e8c-9e74-a512b4d9d1c9","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":25,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6802","nonce":1075788976},{"type":"v1","addr":"192.168.123.103:6803","nonce":1075788976}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6804","nonce":1075788976},{"type":"v1","addr":"192.168.123.103:6805","nonce":1075788976}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6808","nonce":1075788976},{"type":"v1","addr":"192.168.123.103:6809","nonce":1075788976}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6806","nonce":1075788976},{"type":"v1","addr":"192.168.123.103:6807","nonce":1075788976}]},"public_addr":"192.168.123.103:6803/1075788976","cluster_addr":"192.168.123.103:6805/1075788976","heartbeat_back_addr":"192.168.123.103:6809/1075788976","heartbeat_front_addr":"192.168.123.103:6807/1075788976","state":["exists","up"]},{"osd":1,"uuid":"0ee8add4-d132-4666-b7ad-a8416c3c05bf","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6810","nonce":2015646488},{"type":"v1","addr":"192.168.123.103:6811","nonce":2015646488}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6812","nonce":2015646488},{"type":"v1","addr":"192.168.123.103:6813","nonce":2015646488}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6816","nonce":2015646488},{"type":"v1","addr":"192.168.123.103:6817","nonce":2015646488}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6814","nonce":2015646488},{"type":"v1","addr":"192.168.123.103:6815","nonce":2015646488}]},"public_addr":"192.168.123.103:6811/2015646488","cluster_addr":"192.168.123.103:6813/2015646488","heartbeat_back_addr":"192.168.123.103:6817/2015646488","heartbeat_front_addr":"192.168.123.103:6815/2015646488","state":["exists","up"]},{"osd":2,"uuid":"f76cddf6-4356-443b-8d69-5d0e6d8a3803","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":50,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6800","nonce":1899064825},{"type":"v1","addr":"192.168.123.104:6801","nonce":1899064825}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6802","nonce":1899064825},{"type":"v1","addr":"192.168.123.104:6803","nonce":1899064825}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6806","nonce":1899064825},{"type":"v1","addr":"192.168.123.104:6807","nonce":1899064825}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6804","nonce":1899064825},{"type":"v1","addr":"192.168.123.104:6805","nonce":1899064825}]},"public_addr":"192.168.123.104:6801/1899064825","cluster_addr":"192.168.123.104:6803/1899064825","heartbeat_back_addr":"192.168.123.104:6807/1899064825","heartbeat_front_addr":"192.168.123.104:6805/1899064825","state":["exists","up"]},{"osd":3,"uuid":"d1d9774a-a921-4ff4-9d67-c8545864b268","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean
_begin":0,"last_clean_end":0,"up_from":25,"up_thru":50,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6808","nonce":1600567220},{"type":"v1","addr":"192.168.123.104:6809","nonce":1600567220}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6810","nonce":1600567220},{"type":"v1","addr":"192.168.123.104:6811","nonce":1600567220}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6814","nonce":1600567220},{"type":"v1","addr":"192.168.123.104:6815","nonce":1600567220}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6812","nonce":1600567220},{"type":"v1","addr":"192.168.123.104:6813","nonce":1600567220}]},"public_addr":"192.168.123.104:6809/1600567220","cluster_addr":"192.168.123.104:6811/1600567220","heartbeat_back_addr":"192.168.123.104:6815/1600567220","heartbeat_front_addr":"192.168.123.104:6813/1600567220","state":["exists","up"]},{"osd":4,"uuid":"97a3c763-32a2-413f-8d3f-0e7163f512ed","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":31,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6816","nonce":3814952582},{"type":"v1","addr":"192.168.123.104:6817","nonce":3814952582}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6818","nonce":3814952582},{"type":"v1","addr":"192.168.123.104:6819","nonce":3814952582}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6822","nonce":3814952582},{"type":"v1","addr":"192.168.123.104:6823","nonce":3814952582}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6820","nonce":3814952582},{"type":"v1","addr":"192.168.123.104:6821","nonce":3814952582}]},"public_addr":"192.168.123.104:6817/3814952582","cluster_addr":"192.168.123.104:6819/3814952582","heartbeat_back_addr":"192.168.123.104:6823/3814952582","heartbeat_front_addr":"192.168.123.104:6821/3814952582","state":["exists","up"]},{"osd":5,"uuid":"628905a2-37b8-4495-89ad-022957204832","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":37,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6800","nonce":67591369},{"type":"v1","addr":"192.168.123.105:6801","nonce":67591369}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6802","nonce":67591369},{"type":"v1","addr":"192.168.123.105:6803","nonce":67591369}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6806","nonce":67591369},{"type":"v1","addr":"192.168.123.105:6807","nonce":67591369}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6804","nonce":67591369},{"type":"v1","addr":"192.168.123.105:6805","nonce":67591369}]},"public_addr":"192.168.123.105:6801/67591369","cluster_addr":"192.168.123.105:6803/67591369","heartbeat_back_addr":"192.168.123.105:6807/67591369","heartbeat_front_addr":"192.168.123.105:6805/67591369","state":["exists","up"]},{"osd":6,"uuid":"bf677cce-a472-46ab-9a91-492f3b2e689b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":43,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6808","nonce":1573740685},{"type":"v1","addr":"192.168.123.105:6809","nonce":1573740685}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6810","nonce":1573740685},{"type":"v1","addr":"192.168.123.105:6811","nonce":1573740685}]},
"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6814","nonce":1573740685},{"type":"v1","addr":"192.168.123.105:6815","nonce":1573740685}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6812","nonce":1573740685},{"type":"v1","addr":"192.168.123.105:6813","nonce":1573740685}]},"public_addr":"192.168.123.105:6809/1573740685","cluster_addr":"192.168.123.105:6811/1573740685","heartbeat_back_addr":"192.168.123.105:6815/1573740685","heartbeat_front_addr":"192.168.123.105:6813/1573740685","state":["exists","up"]},{"osd":7,"uuid":"377ff461-7194-48e3-8093-29ef296bd4de","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":48,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6816","nonce":3385956239},{"type":"v1","addr":"192.168.123.105:6817","nonce":3385956239}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6818","nonce":3385956239},{"type":"v1","addr":"192.168.123.105:6819","nonce":3385956239}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6822","nonce":3385956239},{"type":"v1","addr":"192.168.123.105:6823","nonce":3385956239}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6820","nonce":3385956239},{"type":"v1","addr":"192.168.123.105:6821","nonce":3385956239}]},"public_addr":"192.168.123.105:6817/3385956239","cluster_addr":"192.168.123.105:6819/3385956239","heartbeat_back_addr":"192.168.123.105:6823/3385956239","heartbeat_front_addr":"192.168.123.105:6821/3385956239","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:18:31.521402+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:19:04.025881+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:19:34.043331+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:20:06.342628+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:20:38.397649+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:21:08.079874+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:21:40.198080+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:22:12.612686+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},
"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T14:23:15.229 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph tell osd.0 flush_pg_stats 2026-03-09T14:23:15.229 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph tell osd.1 flush_pg_stats 2026-03-09T14:23:15.229 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph tell osd.2 flush_pg_stats 2026-03-09T14:23:15.229 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph tell osd.3 flush_pg_stats 2026-03-09T14:23:15.229 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph tell osd.4 flush_pg_stats 2026-03-09T14:23:15.229 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph tell osd.5 flush_pg_stats 2026-03-09T14:23:15.230 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph tell osd.6 flush_pg_stats 2026-03-09T14:23:15.230 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph tell osd.7 flush_pg_stats 2026-03-09T14:23:16.304 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:16 vm03 bash[17524]: cluster 2026-03-09T14:23:14.951621+0000 mgr.x (mgr.14150) 278 : cluster [DBG] pgmap v243: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:23:16.304 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:16 vm03 bash[17524]: cluster 2026-03-09T14:23:14.951621+0000 mgr.x (mgr.14150) 278 : cluster [DBG] pgmap v243: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:23:16.304 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:16 vm03 bash[17524]: audit 2026-03-09T14:23:15.165918+0000 mon.a (mon.0) 727 : audit [DBG] from='client.? 192.168.123.103:0/1583095528' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:23:16.304 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:16 vm03 bash[17524]: audit 2026-03-09T14:23:15.165918+0000 mon.a (mon.0) 727 : audit [DBG] from='client.? 
192.168.123.103:0/1583095528' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:23:16.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:16 vm04 bash[19581]: cluster 2026-03-09T14:23:14.951621+0000 mgr.x (mgr.14150) 278 : cluster [DBG] pgmap v243: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:23:16.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:16 vm04 bash[19581]: cluster 2026-03-09T14:23:14.951621+0000 mgr.x (mgr.14150) 278 : cluster [DBG] pgmap v243: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:23:16.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:16 vm04 bash[19581]: audit 2026-03-09T14:23:15.165918+0000 mon.a (mon.0) 727 : audit [DBG] from='client.? 192.168.123.103:0/1583095528' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:23:16.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:16 vm04 bash[19581]: audit 2026-03-09T14:23:15.165918+0000 mon.a (mon.0) 727 : audit [DBG] from='client.? 192.168.123.103:0/1583095528' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:23:16.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:16 vm05 bash[20070]: cluster 2026-03-09T14:23:14.951621+0000 mgr.x (mgr.14150) 278 : cluster [DBG] pgmap v243: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:23:16.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:16 vm05 bash[20070]: cluster 2026-03-09T14:23:14.951621+0000 mgr.x (mgr.14150) 278 : cluster [DBG] pgmap v243: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:23:16.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:16 vm05 bash[20070]: audit 2026-03-09T14:23:15.165918+0000 mon.a (mon.0) 727 : audit [DBG] from='client.? 192.168.123.103:0/1583095528' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:23:16.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:16 vm05 bash[20070]: audit 2026-03-09T14:23:15.165918+0000 mon.a (mon.0) 727 : audit [DBG] from='client.? 
192.168.123.103:0/1583095528' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:23:18.054 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:23:17 vm03 bash[37744]: debug there is no tcmu-runner data available 2026-03-09T14:23:18.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:18 vm04 bash[19581]: cluster 2026-03-09T14:23:16.951963+0000 mgr.x (mgr.14150) 279 : cluster [DBG] pgmap v244: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:18.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:18 vm04 bash[19581]: cluster 2026-03-09T14:23:16.951963+0000 mgr.x (mgr.14150) 279 : cluster [DBG] pgmap v244: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:18.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:18 vm05 bash[20070]: cluster 2026-03-09T14:23:16.951963+0000 mgr.x (mgr.14150) 279 : cluster [DBG] pgmap v244: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:18.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:18 vm05 bash[20070]: cluster 2026-03-09T14:23:16.951963+0000 mgr.x (mgr.14150) 279 : cluster [DBG] pgmap v244: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:18.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:18 vm03 bash[17524]: cluster 2026-03-09T14:23:16.951963+0000 mgr.x (mgr.14150) 279 : cluster [DBG] pgmap v244: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:18.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:18 vm03 bash[17524]: cluster 2026-03-09T14:23:16.951963+0000 mgr.x (mgr.14150) 279 : cluster [DBG] pgmap v244: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:19.008 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:23:18 vm05 bash[38699]: debug there is no tcmu-runner data available 2026-03-09T14:23:19.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:19 vm05 bash[20070]: audit 2026-03-09T14:23:17.653683+0000 mgr.x (mgr.14150) 280 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:19.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:19 vm05 bash[20070]: audit 2026-03-09T14:23:17.653683+0000 mgr.x (mgr.14150) 280 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:19.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:19 vm04 bash[19581]: audit 2026-03-09T14:23:17.653683+0000 mgr.x (mgr.14150) 280 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:19.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:19 vm04 bash[19581]: audit 2026-03-09T14:23:17.653683+0000 mgr.x (mgr.14150) 280 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:19.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:19 vm03 bash[17524]: audit 2026-03-09T14:23:17.653683+0000 mgr.x (mgr.14150) 280 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
2026-03-09T14:23:19.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:19 vm03 bash[17524]: audit 2026-03-09T14:23:17.653683+0000 mgr.x (mgr.14150) 280 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:20.164 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config 2026-03-09T14:23:20.164 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config 2026-03-09T14:23:20.164 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config 2026-03-09T14:23:20.165 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config 2026-03-09T14:23:20.165 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config 2026-03-09T14:23:20.168 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config 2026-03-09T14:23:20.168 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config 2026-03-09T14:23:20.169 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config 2026-03-09T14:23:20.461 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:20 vm03 bash[17524]: audit 2026-03-09T14:23:18.621892+0000 mgr.x (mgr.14150) 281 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:20.462 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:20 vm03 bash[17524]: audit 2026-03-09T14:23:18.621892+0000 mgr.x (mgr.14150) 281 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:20.462 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:20 vm03 bash[17524]: cluster 2026-03-09T14:23:18.952235+0000 mgr.x (mgr.14150) 282 : cluster [DBG] pgmap v245: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:20.462 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:20 vm03 bash[17524]: cluster 2026-03-09T14:23:18.952235+0000 mgr.x (mgr.14150) 282 : cluster [DBG] pgmap v245: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:20.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:20 vm04 bash[19581]: audit 2026-03-09T14:23:18.621892+0000 mgr.x (mgr.14150) 281 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:20.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:20 vm04 bash[19581]: audit 2026-03-09T14:23:18.621892+0000 mgr.x (mgr.14150) 281 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:20.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:20 vm04 bash[19581]: cluster 2026-03-09T14:23:18.952235+0000 mgr.x (mgr.14150) 282 : cluster [DBG] pgmap v245: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:20.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:20 vm04 bash[19581]: 
cluster 2026-03-09T14:23:18.952235+0000 mgr.x (mgr.14150) 282 : cluster [DBG] pgmap v245: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:20.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:20 vm05 bash[20070]: audit 2026-03-09T14:23:18.621892+0000 mgr.x (mgr.14150) 281 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:20.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:20 vm05 bash[20070]: audit 2026-03-09T14:23:18.621892+0000 mgr.x (mgr.14150) 281 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:20.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:20 vm05 bash[20070]: cluster 2026-03-09T14:23:18.952235+0000 mgr.x (mgr.14150) 282 : cluster [DBG] pgmap v245: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:20.509 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:20 vm05 bash[20070]: cluster 2026-03-09T14:23:18.952235+0000 mgr.x (mgr.14150) 282 : cluster [DBG] pgmap v245: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:20.630 INFO:teuthology.orchestra.run.vm03.stdout:158913789979 2026-03-09T14:23:20.630 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph osd last-stat-seq osd.5 2026-03-09T14:23:20.711 INFO:teuthology.orchestra.run.vm03.stdout:77309411374 2026-03-09T14:23:20.711 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph osd last-stat-seq osd.2 2026-03-09T14:23:20.882 INFO:teuthology.orchestra.run.vm03.stdout:206158430224 2026-03-09T14:23:20.882 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph osd last-stat-seq osd.7 2026-03-09T14:23:21.075 INFO:teuthology.orchestra.run.vm03.stdout:107374182440 2026-03-09T14:23:21.076 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph osd last-stat-seq osd.3 2026-03-09T14:23:21.144 INFO:teuthology.orchestra.run.vm03.stdout:133143986211 2026-03-09T14:23:21.144 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph osd last-stat-seq osd.4 2026-03-09T14:23:21.231 INFO:teuthology.orchestra.run.vm03.stdout:55834574900 2026-03-09T14:23:21.231 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph osd last-stat-seq osd.1 2026-03-09T14:23:21.247 INFO:teuthology.orchestra.run.vm03.stdout:184683593749 2026-03-09T14:23:21.247 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image 
quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph osd last-stat-seq osd.6 2026-03-09T14:23:21.250 INFO:teuthology.orchestra.run.vm03.stdout:34359738427 2026-03-09T14:23:21.250 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph osd last-stat-seq osd.0 2026-03-09T14:23:22.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:22 vm04 bash[19581]: cluster 2026-03-09T14:23:20.952560+0000 mgr.x (mgr.14150) 283 : cluster [DBG] pgmap v246: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.8 KiB/s rd, 2 op/s 2026-03-09T14:23:22.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:22 vm04 bash[19581]: cluster 2026-03-09T14:23:20.952560+0000 mgr.x (mgr.14150) 283 : cluster [DBG] pgmap v246: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.8 KiB/s rd, 2 op/s 2026-03-09T14:23:22.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:22 vm05 bash[20070]: cluster 2026-03-09T14:23:20.952560+0000 mgr.x (mgr.14150) 283 : cluster [DBG] pgmap v246: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.8 KiB/s rd, 2 op/s 2026-03-09T14:23:22.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:22 vm05 bash[20070]: cluster 2026-03-09T14:23:20.952560+0000 mgr.x (mgr.14150) 283 : cluster [DBG] pgmap v246: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.8 KiB/s rd, 2 op/s 2026-03-09T14:23:22.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:22 vm03 bash[17524]: cluster 2026-03-09T14:23:20.952560+0000 mgr.x (mgr.14150) 283 : cluster [DBG] pgmap v246: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.8 KiB/s rd, 2 op/s 2026-03-09T14:23:22.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:22 vm03 bash[17524]: cluster 2026-03-09T14:23:20.952560+0000 mgr.x (mgr.14150) 283 : cluster [DBG] pgmap v246: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.8 KiB/s rd, 2 op/s 2026-03-09T14:23:24.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:24 vm05 bash[20070]: cluster 2026-03-09T14:23:22.952892+0000 mgr.x (mgr.14150) 284 : cluster [DBG] pgmap v247: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.0 KiB/s rd, 1 op/s 2026-03-09T14:23:24.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:24 vm05 bash[20070]: cluster 2026-03-09T14:23:22.952892+0000 mgr.x (mgr.14150) 284 : cluster [DBG] pgmap v247: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.0 KiB/s rd, 1 op/s 2026-03-09T14:23:24.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:24 vm04 bash[19581]: cluster 2026-03-09T14:23:22.952892+0000 mgr.x (mgr.14150) 284 : cluster [DBG] pgmap v247: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.0 KiB/s rd, 1 op/s 2026-03-09T14:23:24.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:24 vm04 bash[19581]: cluster 2026-03-09T14:23:22.952892+0000 mgr.x (mgr.14150) 284 : cluster [DBG] pgmap v247: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.0 KiB/s rd, 1 op/s 2026-03-09T14:23:24.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:24 vm03 bash[17524]: cluster 2026-03-09T14:23:22.952892+0000 mgr.x (mgr.14150) 284 : cluster [DBG] pgmap v247: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB 
used, 160 GiB / 160 GiB avail; 2.0 KiB/s rd, 1 op/s 2026-03-09T14:23:24.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:24 vm03 bash[17524]: cluster 2026-03-09T14:23:22.952892+0000 mgr.x (mgr.14150) 284 : cluster [DBG] pgmap v247: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.0 KiB/s rd, 1 op/s 2026-03-09T14:23:25.364 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config 2026-03-09T14:23:25.364 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config 2026-03-09T14:23:25.364 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config 2026-03-09T14:23:25.366 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config 2026-03-09T14:23:25.366 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config 2026-03-09T14:23:25.368 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config 2026-03-09T14:23:25.370 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config 2026-03-09T14:23:25.372 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config 2026-03-09T14:23:26.084 INFO:teuthology.orchestra.run.vm03.stdout:184683593750 2026-03-09T14:23:26.206 INFO:tasks.cephadm.ceph_manager.ceph:need seq 184683593749 got 184683593750 for osd.6 2026-03-09T14:23:26.206 DEBUG:teuthology.parallel:result is None 2026-03-09T14:23:26.255 INFO:teuthology.orchestra.run.vm03.stdout:34359738428 2026-03-09T14:23:26.293 INFO:teuthology.orchestra.run.vm03.stdout:55834574901 2026-03-09T14:23:26.326 INFO:teuthology.orchestra.run.vm03.stdout:107374182441 2026-03-09T14:23:26.366 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:26 vm03 bash[17524]: cluster 2026-03-09T14:23:24.953180+0000 mgr.x (mgr.14150) 285 : cluster [DBG] pgmap v248: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:23:26.367 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:26 vm03 bash[17524]: cluster 2026-03-09T14:23:24.953180+0000 mgr.x (mgr.14150) 285 : cluster [DBG] pgmap v248: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:23:26.367 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:26 vm03 bash[17524]: audit 2026-03-09T14:23:26.079521+0000 mon.a (mon.0) 728 : audit [DBG] from='client.? 192.168.123.103:0/1508356650' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T14:23:26.367 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:26 vm03 bash[17524]: audit 2026-03-09T14:23:26.079521+0000 mon.a (mon.0) 728 : audit [DBG] from='client.? 192.168.123.103:0/1508356650' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T14:23:26.367 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:26 vm03 bash[17524]: audit 2026-03-09T14:23:26.238932+0000 mon.b (mon.2) 21 : audit [DBG] from='client.? 
192.168.123.103:0/333166682' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T14:23:26.367 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:26 vm03 bash[17524]: audit 2026-03-09T14:23:26.238932+0000 mon.b (mon.2) 21 : audit [DBG] from='client.? 192.168.123.103:0/333166682' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T14:23:26.367 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:26 vm03 bash[17524]: audit 2026-03-09T14:23:26.289709+0000 mon.a (mon.0) 729 : audit [DBG] from='client.? 192.168.123.103:0/1649787816' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T14:23:26.367 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:26 vm03 bash[17524]: audit 2026-03-09T14:23:26.289709+0000 mon.a (mon.0) 729 : audit [DBG] from='client.? 192.168.123.103:0/1649787816' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T14:23:26.381 INFO:teuthology.orchestra.run.vm03.stdout:133143986212 2026-03-09T14:23:26.381 INFO:teuthology.orchestra.run.vm03.stdout:158913789980 2026-03-09T14:23:26.392 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738427 got 34359738428 for osd.0 2026-03-09T14:23:26.392 DEBUG:teuthology.parallel:result is None 2026-03-09T14:23:26.447 INFO:teuthology.orchestra.run.vm03.stdout:206158430224 2026-03-09T14:23:26.450 INFO:tasks.cephadm.ceph_manager.ceph:need seq 55834574900 got 55834574901 for osd.1 2026-03-09T14:23:26.450 DEBUG:teuthology.parallel:result is None 2026-03-09T14:23:26.490 INFO:teuthology.orchestra.run.vm03.stdout:77309411375 2026-03-09T14:23:26.551 INFO:tasks.cephadm.ceph_manager.ceph:need seq 158913789979 got 158913789980 for osd.5 2026-03-09T14:23:26.551 DEBUG:teuthology.parallel:result is None 2026-03-09T14:23:26.588 INFO:tasks.cephadm.ceph_manager.ceph:need seq 107374182440 got 107374182441 for osd.3 2026-03-09T14:23:26.588 DEBUG:teuthology.parallel:result is None 2026-03-09T14:23:26.590 INFO:tasks.cephadm.ceph_manager.ceph:need seq 133143986211 got 133143986212 for osd.4 2026-03-09T14:23:26.590 DEBUG:teuthology.parallel:result is None 2026-03-09T14:23:26.607 INFO:tasks.cephadm.ceph_manager.ceph:need seq 206158430224 got 206158430224 for osd.7 2026-03-09T14:23:26.607 DEBUG:teuthology.parallel:result is None 2026-03-09T14:23:26.616 INFO:tasks.cephadm.ceph_manager.ceph:need seq 77309411374 got 77309411375 for osd.2 2026-03-09T14:23:26.616 DEBUG:teuthology.parallel:result is None 2026-03-09T14:23:26.616 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-09T14:23:26.616 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph pg dump --format=json 2026-03-09T14:23:26.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:26 vm05 bash[20070]: cluster 2026-03-09T14:23:24.953180+0000 mgr.x (mgr.14150) 285 : cluster [DBG] pgmap v248: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:23:26.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:26 vm05 bash[20070]: cluster 2026-03-09T14:23:24.953180+0000 mgr.x (mgr.14150) 285 : cluster [DBG] pgmap v248: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:23:26.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:26 vm05 bash[20070]: audit 2026-03-09T14:23:26.079521+0000 mon.a 
(mon.0) 728 : audit [DBG] from='client.? 192.168.123.103:0/1508356650' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch
2026-03-09T14:23:26.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:26 vm05 bash[20070]: audit 2026-03-09T14:23:26.238932+0000 mon.b (mon.2) 21 : audit [DBG] from='client.? 192.168.123.103:0/333166682' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch
2026-03-09T14:23:26.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:26 vm05 bash[20070]: audit 2026-03-09T14:23:26.289709+0000 mon.a (mon.0) 729 : audit [DBG] from='client.? 192.168.123.103:0/1649787816' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch
2026-03-09T14:23:26.759 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:26 vm04 bash[19581]: cluster 2026-03-09T14:23:24.953180+0000 mgr.x (mgr.14150) 285 : cluster [DBG] pgmap v248: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:23:26.759 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:26 vm04 bash[19581]: audit 2026-03-09T14:23:26.079521+0000 mon.a (mon.0) 728 : audit [DBG] from='client.? 192.168.123.103:0/1508356650' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch
2026-03-09T14:23:26.759 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:26 vm04 bash[19581]: audit 2026-03-09T14:23:26.238932+0000 mon.b (mon.2) 21 : audit [DBG] from='client.? 192.168.123.103:0/333166682' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch
2026-03-09T14:23:26.759 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:26 vm04 bash[19581]: audit 2026-03-09T14:23:26.289709+0000 mon.a (mon.0) 729 : audit [DBG] from='client.? 192.168.123.103:0/1649787816' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch
2026-03-09T14:23:27.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:27 vm03 bash[17524]: audit 2026-03-09T14:23:26.326159+0000 mon.c (mon.1) 24 : audit [DBG] from='client.? 192.168.123.103:0/1569561671' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch
2026-03-09T14:23:27.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:27 vm03 bash[17524]: audit 2026-03-09T14:23:26.377436+0000 mon.a (mon.0) 730 : audit [DBG] from='client.? 192.168.123.103:0/653775217' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch
2026-03-09T14:23:27.663 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:27 vm03 bash[17524]: audit 2026-03-09T14:23:26.378365+0000 mon.a (mon.0) 731 : audit [DBG] from='client.? 192.168.123.103:0/1834963552' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch
2026-03-09T14:23:27.663 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:27 vm03 bash[17524]: audit 2026-03-09T14:23:26.442009+0000 mon.b (mon.2) 22 : audit [DBG] from='client.? 192.168.123.103:0/1964121220' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch
2026-03-09T14:23:27.663 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:27 vm03 bash[17524]: audit 2026-03-09T14:23:26.485887+0000 mon.b (mon.2) 23 : audit [DBG] from='client.? 192.168.123.103:0/525501074' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch
2026-03-09T14:23:27.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:27 vm05 bash[20070]: audit 2026-03-09T14:23:26.326159+0000 mon.c (mon.1) 24 : audit [DBG] from='client.? 192.168.123.103:0/1569561671' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch
2026-03-09T14:23:27.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:27 vm05 bash[20070]: audit 2026-03-09T14:23:26.377436+0000 mon.a (mon.0) 730 : audit [DBG] from='client.? 192.168.123.103:0/653775217' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch
2026-03-09T14:23:27.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:27 vm05 bash[20070]: audit 2026-03-09T14:23:26.378365+0000 mon.a (mon.0) 731 : audit [DBG] from='client.? 192.168.123.103:0/1834963552' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch
2026-03-09T14:23:27.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:27 vm05 bash[20070]: audit 2026-03-09T14:23:26.442009+0000 mon.b (mon.2) 22 : audit [DBG] from='client.? 192.168.123.103:0/1964121220' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch
2026-03-09T14:23:27.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:27 vm05 bash[20070]: audit 2026-03-09T14:23:26.485887+0000 mon.b (mon.2) 23 : audit [DBG] from='client.? 192.168.123.103:0/525501074' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch
2026-03-09T14:23:27.759 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:27 vm04 bash[19581]: audit 2026-03-09T14:23:26.326159+0000 mon.c (mon.1) 24 : audit [DBG] from='client.? 192.168.123.103:0/1569561671' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch
2026-03-09T14:23:27.759 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:27 vm04 bash[19581]: audit 2026-03-09T14:23:26.377436+0000 mon.a (mon.0) 730 : audit [DBG] from='client.? 192.168.123.103:0/653775217' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch
2026-03-09T14:23:27.759 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:27 vm04 bash[19581]: audit 2026-03-09T14:23:26.378365+0000 mon.a (mon.0) 731 : audit [DBG] from='client.? 192.168.123.103:0/1834963552' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch
2026-03-09T14:23:27.759 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:27 vm04 bash[19581]: audit 2026-03-09T14:23:26.442009+0000 mon.b (mon.2) 22 : audit [DBG] from='client.? 192.168.123.103:0/1964121220' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch
2026-03-09T14:23:27.759 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:27 vm04 bash[19581]: audit 2026-03-09T14:23:26.485887+0000 mon.b (mon.2) 23 : audit [DBG] from='client.? 192.168.123.103:0/525501074' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch
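The burst of `osd last-stat-seq` queries above, one per OSD (ids 0 through 7), is the polling half of a PG-stats flush barrier: each OSD is told to flush its PG stats, which returns a stat sequence number, and the caller then polls the monitors until the reported last-stat-seq for that OSD catches up. A minimal sketch of that pattern (a hypothetical standalone helper, not teuthology's in-tree implementation):

import subprocess
import time


def ceph(*args):
    # Run a ceph CLI command and return its stdout as text.
    return subprocess.check_output(("ceph",) + args, text=True)


def flush_pg_stats(osd_ids, timeout=90):
    # Barrier: wait until every OSD's PG stats have reached the mons.
    # `ceph tell osd.N flush_pg_stats` prints the stat sequence number
    # that the flush was assigned on that OSD.
    want = {i: int(ceph("tell", f"osd.{i}", "flush_pg_stats")) for i in osd_ids}
    deadline = time.time() + timeout
    for i, need in want.items():
        # These polls are what shows up in the mon audit log as
        # cmd=[{"prefix": "osd last-stat-seq", "id": N}].
        while int(ceph("osd", "last-stat-seq", str(i))) < need:
            if time.time() > deadline:
                raise TimeoutError(f"osd.{i} stats never reached seq {need}")
            time.sleep(1)


flush_pg_stats(range(8))  # this run has osd.0 through osd.7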
2026-03-09T14:23:28.054 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:23:27 vm03 bash[37744]: debug there is no tcmu-runner data available
2026-03-09T14:23:28.632 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:28 vm05 bash[20070]: cluster 2026-03-09T14:23:26.953447+0000 mgr.x (mgr.14150) 286 : cluster [DBG] pgmap v249: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:23:28.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:28 vm04 bash[19581]: cluster 2026-03-09T14:23:26.953447+0000 mgr.x (mgr.14150) 286 : cluster [DBG] pgmap v249: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:23:28.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:28 vm03 bash[17524]: cluster 2026-03-09T14:23:26.953447+0000 mgr.x (mgr.14150) 286 : cluster [DBG] pgmap v249: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:23:29.008 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:23:28 vm05 bash[38699]: debug there is no tcmu-runner data available
2026-03-09T14:23:29.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:29 vm04 bash[19581]: audit 2026-03-09T14:23:27.661754+0000 mgr.x (mgr.14150) 287 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:23:29.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:29 vm05 bash[20070]: audit 2026-03-09T14:23:27.661754+0000 mgr.x (mgr.14150) 287 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:23:29.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:29 vm03 bash[17524]: audit 2026-03-09T14:23:27.661754+0000 mgr.x (mgr.14150) 287 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:23:30.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:30 vm05 bash[20070]: audit 2026-03-09T14:23:28.631459+0000 mgr.x (mgr.14150) 288 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:23:30.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:30 vm05 bash[20070]: cluster 2026-03-09T14:23:28.953705+0000 mgr.x (mgr.14150) 289 : cluster [DBG] pgmap v250: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:23:30.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:30 vm04 bash[19581]: audit 2026-03-09T14:23:28.631459+0000 mgr.x (mgr.14150) 288 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:23:30.759 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:30 vm04 bash[19581]: cluster 2026-03-09T14:23:28.953705+0000 mgr.x (mgr.14150) 289 : cluster [DBG] pgmap v250: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:23:30.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:30 vm03 bash[17524]: audit 2026-03-09T14:23:28.631459+0000 mgr.x (mgr.14150) 288 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:23:30.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:30 vm03 bash[17524]: cluster
2026-03-09T14:23:28.953705+0000 mgr.x (mgr.14150) 289 : cluster [DBG] pgmap v250: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:31.278 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config 2026-03-09T14:23:31.534 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:23:31.534 INFO:teuthology.orchestra.run.vm03.stderr:dumped all 2026-03-09T14:23:31.597 INFO:teuthology.orchestra.run.vm03.stdout:{"pg_ready":true,"pg_map":{"version":251,"stamp":"2026-03-09T14:23:30.953846+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459688,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":135,"num_read_kb":120,"num_write":63,"num_write_kb":587,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":4,"num_bytes_recovered":918560,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":41,"ondisk_log_size":41,"up":12,"acting":12,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":12,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":6,"kb":167739392,"kb_used":221372,"kb_used_data":6596,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167518020,"statfs":{"total":171765137408,"available":171538452480,"internally_reserved":0,"allocated":6754304,"data_stored":3651928,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12711,"internal_metadata":219663961},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":30,"num_read_kb":30,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,
"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.001731"},"pg_stats":[{"pgid":"2.2","version":"55'2","reported_seq":48,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T14:22:57.506474+0000","last_change":"2026-03-09T14:22:41.099803+0000","last_active":"2026-03-09T14:22:57.506474+0000","last_peered":"2026-03-09T14:22:57.506474+0000","last_clean":"2026-03-09T14:22:57.506474+0000","last_became_active":"2026-03-09T14:22:35.268881+0000","last_became_peered":"2026-03-09T14:22:35.268881+0000","last_unstale":"2026-03-09T14:22:57.506474+0000","last_undegraded":"2026-03-09T14:22:57.506474+0000","last_fullsized":"2026-03-09T14:22:57.506474+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:22:34.035258+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:22:34.035258+0000","last_clean_scrub_stamp":"2026-03-09T14:22:34.035258+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:44:45.013243+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00053685199999999999,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,2],"acting":[3,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1","version":"53'1","reported_seq":46,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T14:22:57.420503+0000","last_change":"2026-03-09T14:22:41.096971+0000","last_active":"2026-03-09T14:22:57.420503+0000","last_peered":"2026-03-09T14:22:57.420503+0000","last_clean":"2026-03-09T14:22:57.420503+0000","last_became_active":"2026-03-09T14:22:35.326198+0000","last_became_peered":"2026-03-09T14:22:35.326198+0000","last_unstale":"2026-03-09T14:22:57.420503+0000","last_undegraded":"2026-03-09T14:22:57.420503+0000","last_fullsized":"2026-03-09T14:22:57.420503+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:22:34.035258+0000","last_deep
_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:22:34.035258+0000","last_clean_scrub_stamp":"2026-03-09T14:22:34.035258+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:52:58.550647+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000176751,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1,0],"acting":[2,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"2.0","version":"56'6","reported_seq":139,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T14:23:28.662526+0000","last_change":"2026-03-09T14:22:41.099299+0000","last_active":"2026-03-09T14:23:28.662526+0000","last_peered":"2026-03-09T14:23:28.662526+0000","last_clean":"2026-03-09T14:23:28.662526+0000","last_became_active":"2026-03-09T14:22:35.067110+0000","last_became_peered":"2026-03-09T14:22:35.067110+0000","last_unstale":"2026-03-09T14:23:28.662526+0000","last_undegraded":"2026-03-09T14:23:28.662526+0000","last_fullsized":"2026-03-09T14:23:28.662526+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:22:34.035258+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:22:34.035258+0000","last_clean_scrub_stamp":"2026-03-09T14:22:34.035258+0000","objects_scrubbed":0,"log_size":6,"log_dups_size":0,"ondisk_log_size":6,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:15:07.137680+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000182442,"stat_sum":{"num_bytes":389,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":89,"num_read_kb":83,"num_write":4,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,6],"acting":[3,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"1.0","version":"21'32","reported_seq":100,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T14:22:57.506556+0000","last_change":"2026-03-09T14:21:13.263003+0000","last_active":"2026-03-09T14:22:57.506556+0000","last_peered":"2026-03-09T14:22:57.506556+0000","last_clean":"2026-03-09T14:22:57.506556+0000","last_became_active":"2026-03-09T14:21:12.955080+0000","last_became_peered":"2026-03-09T14:21:12.955080+0000","last_unstale":"2026-03-09T14:22:57.506556+0000","last_undegraded":"2026-03-09T14:22:57.506556+0000","last_fullsized":"2026-03-09T14:22:57.506556+0000","mapping_epoch":38,"log_start":"0'0","ondisk_log_start":"0'0","created":19,"last_epoch_clean":39,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:19:37.045123+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:19:37.045123+0000","last_clean_scrub_stamp":"2026-03-09T14:19:37.045123+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:17:29.873524+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":4,"num_bytes_recovered":918560,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,2],"acting":[3,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]}],"pool_stats":[{"poolid":2,"num_pg":3,"stat_sum":{"num_bytes":408,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":89,"num_read_kb":83,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1224,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":9,"ondisk_log_size":9,"up":9,"acting":9,"num_store_stats":6},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":4,"num_bytes_recovered":918560,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"u
p":3,"acting":3,"num_store_stats":5}],"osd_stats":[{"osd":7,"up_from":48,"seq":206158430226,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27500,"kb_used_data":652,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939924,"statfs":{"total":21470642176,"available":21442482176,"internally_reserved":0,"allocated":667648,"data_stored":284108,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":43,"seq":184683593751,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27500,"kb_used_data":656,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939924,"statfs":{"total":21470642176,"available":21442482176,"internally_reserved":0,"allocated":671744,"data_stored":284497,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":37,"seq":158913789981,"num_pgs":2,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27952,"kb_used_data":1108,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939472,"statfs":{"total":21470642176,"available":21442019328,"internally_reserved":0,"allocated":1134592,"data_stored":743777,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":31,"seq":133143986213,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27500,"kb_used_data":652,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939924,"statfs":{"total":21470642176,"available":21442482176,"internally_reserved":0,"allocated":667648,"data_stored":284108,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":25,"seq":107374182442,"num_pgs":3,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27964,"kb_used_data":1112,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939460,"statfs":{"total":21470642176,"available":21442007040,"internally_reserved":0,"allocated":1138688,"data_stored":743796,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired
":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":18,"seq":77309411376,"num_pgs":3,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27952,"kb_used_data":1108,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939472,"statfs":{"total":21470642176,"available":21442019328,"internally_reserved":0,"allocated":1134592,"data_stored":743407,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574902,"num_pgs":2,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27504,"kb_used_data":656,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939920,"statfs":{"total":21470642176,"available":21442478080,"internally_reserved":0,"allocated":671744,"data_stored":284127,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738429,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27500,"kb_used_data":652,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939924,"statfs":{"total":21470642176,"available":21442482176,"internally_reserved":0,"allocated":667648,"data_stored":284108,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,
"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":408,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-09T14:23:31.597 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph pg dump --format=json 2026-03-09T14:23:32.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:32 vm05 bash[20070]: cluster 2026-03-09T14:23:30.953999+0000 mgr.x (mgr.14150) 290 : cluster [DBG] pgmap v251: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:23:32.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:32 vm05 bash[20070]: cluster 2026-03-09T14:23:30.953999+0000 mgr.x (mgr.14150) 290 : cluster [DBG] pgmap v251: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:23:32.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:32 vm05 bash[20070]: audit 2026-03-09T14:23:31.532908+0000 mgr.x (mgr.14150) 291 : audit [DBG] from='client.14637 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:23:32.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:32 vm05 bash[20070]: audit 2026-03-09T14:23:31.532908+0000 mgr.x (mgr.14150) 291 : audit [DBG] from='client.14637 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:23:32.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:32 vm04 bash[19581]: cluster 2026-03-09T14:23:30.953999+0000 mgr.x (mgr.14150) 290 : cluster [DBG] pgmap v251: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:23:32.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:32 vm04 bash[19581]: cluster 2026-03-09T14:23:30.953999+0000 mgr.x (mgr.14150) 290 : cluster [DBG] pgmap v251: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:23:32.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:32 vm04 bash[19581]: audit 2026-03-09T14:23:31.532908+0000 mgr.x (mgr.14150) 291 : audit [DBG] from='client.14637 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], 
"format": "json"}]: dispatch 2026-03-09T14:23:32.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:32 vm04 bash[19581]: audit 2026-03-09T14:23:31.532908+0000 mgr.x (mgr.14150) 291 : audit [DBG] from='client.14637 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:23:32.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:32 vm03 bash[17524]: cluster 2026-03-09T14:23:30.953999+0000 mgr.x (mgr.14150) 290 : cluster [DBG] pgmap v251: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:23:32.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:32 vm03 bash[17524]: cluster 2026-03-09T14:23:30.953999+0000 mgr.x (mgr.14150) 290 : cluster [DBG] pgmap v251: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:23:32.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:32 vm03 bash[17524]: audit 2026-03-09T14:23:31.532908+0000 mgr.x (mgr.14150) 291 : audit [DBG] from='client.14637 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:23:32.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:32 vm03 bash[17524]: audit 2026-03-09T14:23:31.532908+0000 mgr.x (mgr.14150) 291 : audit [DBG] from='client.14637 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:23:34.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:34 vm05 bash[20070]: cluster 2026-03-09T14:23:32.954327+0000 mgr.x (mgr.14150) 292 : cluster [DBG] pgmap v252: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T14:23:34.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:34 vm05 bash[20070]: cluster 2026-03-09T14:23:32.954327+0000 mgr.x (mgr.14150) 292 : cluster [DBG] pgmap v252: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T14:23:34.759 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:34 vm04 bash[19581]: cluster 2026-03-09T14:23:32.954327+0000 mgr.x (mgr.14150) 292 : cluster [DBG] pgmap v252: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T14:23:34.759 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:34 vm04 bash[19581]: cluster 2026-03-09T14:23:32.954327+0000 mgr.x (mgr.14150) 292 : cluster [DBG] pgmap v252: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T14:23:34.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:34 vm03 bash[17524]: cluster 2026-03-09T14:23:32.954327+0000 mgr.x (mgr.14150) 292 : cluster [DBG] pgmap v252: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T14:23:34.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:34 vm03 bash[17524]: cluster 2026-03-09T14:23:32.954327+0000 mgr.x (mgr.14150) 292 : cluster [DBG] pgmap v252: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T14:23:35.296 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config 2026-03-09T14:23:35.556 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:23:35.556 INFO:teuthology.orchestra.run.vm03.stderr:dumped all 2026-03-09T14:23:35.616 
INFO:teuthology.orchestra.run.vm03.stdout:{"pg_ready":true,"pg_map":{"version":253,"stamp":"2026-03-09T14:23:34.954487+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459688,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":145,"num_read_kb":130,"num_write":63,"num_write_kb":587,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":4,"num_bytes_recovered":918560,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":41,"ondisk_log_size":41,"up":12,"acting":12,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":12,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":6,"kb":167739392,"kb_used":221372,"kb_used_data":6596,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167518020,"statfs":{"total":171765137408,"available":171538452480,"internally_reserved":0,"allocated":6754304,"data_stored":3651928,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12711,"internal_metadata":219663961},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":26,"num_read_kb":26,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.001784"},"pg_stats":[{"pgid":"2.2","version":"55'2","reported_seq":48,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T14:22:57.506474+0000","last_change":"2026-03-09T14:22:41.099803+0000","last_ac
tive":"2026-03-09T14:22:57.506474+0000","last_peered":"2026-03-09T14:22:57.506474+0000","last_clean":"2026-03-09T14:22:57.506474+0000","last_became_active":"2026-03-09T14:22:35.268881+0000","last_became_peered":"2026-03-09T14:22:35.268881+0000","last_unstale":"2026-03-09T14:22:57.506474+0000","last_undegraded":"2026-03-09T14:22:57.506474+0000","last_fullsized":"2026-03-09T14:22:57.506474+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:22:34.035258+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:22:34.035258+0000","last_clean_scrub_stamp":"2026-03-09T14:22:34.035258+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:44:45.013243+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00053685199999999999,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,2],"acting":[3,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1","version":"53'1","reported_seq":46,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T14:22:57.420503+0000","last_change":"2026-03-09T14:22:41.096971+0000","last_active":"2026-03-09T14:22:57.420503+0000","last_peered":"2026-03-09T14:22:57.420503+0000","last_clean":"2026-03-09T14:22:57.420503+0000","last_became_active":"2026-03-09T14:22:35.326198+0000","last_became_peered":"2026-03-09T14:22:35.326198+0000","last_unstale":"2026-03-09T14:22:57.420503+0000","last_undegraded":"2026-03-09T14:22:57.420503+0000","last_fullsized":"2026-03-09T14:22:57.420503+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:22:34.035258+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:22:34.035258+0000","last_clean_scrub_stamp":"2026-03-09T14:22:34.035258+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:52:58.550647+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000176751,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1,0],"acting":[2,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"2.0","version":"56'6","reported_seq":149,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T14:23:33.670352+0000","last_change":"2026-03-09T14:22:41.099299+0000","last_active":"2026-03-09T14:23:33.670352+0000","last_peered":"2026-03-09T14:23:33.670352+0000","last_clean":"2026-03-09T14:23:33.670352+0000","last_became_active":"2026-03-09T14:22:35.067110+0000","last_became_peered":"2026-03-09T14:22:35.067110+0000","last_unstale":"2026-03-09T14:23:33.670352+0000","last_undegraded":"2026-03-09T14:23:33.670352+0000","last_fullsized":"2026-03-09T14:23:33.670352+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:22:34.035258+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:22:34.035258+0000","last_clean_scrub_stamp":"2026-03-09T14:22:34.035258+0000","objects_scrubbed":0,"log_size":6,"log_dups_size":0,"ondisk_log_size":6,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:15:07.137680+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000182442,"stat_sum":{"num_bytes":389,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":99,"num_read_kb":93,"num_write":4,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,6],"acting":[3,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"1.0","version":"21'32","reported_seq":100,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-09T14:22:57.506556+0000","last_change":"2026-03-09T14:21:13.263003+0000","last_active":"2026-03-09T14:22:57.506556+0000","last_peered":"2026-03-09T14:22:57.506556+0000","last_clean":"2026-03-09T14:22:57.506556+0000","last_became_active":"2026-03-09T14:21:12.955080+0000","last_became_peered":"2026-03-09T14:21:12.955080+0000","last_unstale":"2026-03-09T14:22:57.506556+0000","last_undegraded":"2026-03-09T14:22:57.506556+0000","last_fullsized":"2026-03-09T14:22:57.506556+0000","mapping_epoch":38,"log_start":"0'0","ondisk_log_start":"0'0","created":19,"last_epoch_clean":39,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:19:37.045123+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:19:37.045123+0000","last_clean_scrub_stamp":"2026-03-09T14:19:37.045123+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:17:29.873524+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":4,"num_bytes_recovered":918560,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,2],"acting":[3,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]}],"pool_stats":[{"poolid":2,"num_pg":3,"stat_sum":{"num_bytes":408,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":99,"num_read_kb":93,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1224,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":9,"ondisk_log_size":9,"up":9,"acting":9,"num_store_stats":6},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":4,"num_bytes_recovered":918560,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"u
p":3,"acting":3,"num_store_stats":5}],"osd_stats":[{"osd":7,"up_from":48,"seq":206158430226,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27500,"kb_used_data":652,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939924,"statfs":{"total":21470642176,"available":21442482176,"internally_reserved":0,"allocated":667648,"data_stored":284108,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":43,"seq":184683593752,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27500,"kb_used_data":656,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939924,"statfs":{"total":21470642176,"available":21442482176,"internally_reserved":0,"allocated":671744,"data_stored":284497,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":37,"seq":158913789982,"num_pgs":2,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27952,"kb_used_data":1108,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939472,"statfs":{"total":21470642176,"available":21442019328,"internally_reserved":0,"allocated":1134592,"data_stored":743777,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":31,"seq":133143986214,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27500,"kb_used_data":652,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939924,"statfs":{"total":21470642176,"available":21442482176,"internally_reserved":0,"allocated":667648,"data_stored":284108,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":25,"seq":107374182443,"num_pgs":3,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27964,"kb_used_data":1112,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939460,"statfs":{"total":21470642176,"available":21442007040,"internally_reserved":0,"allocated":1138688,"data_stored":743796,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired
":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":18,"seq":77309411377,"num_pgs":3,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27952,"kb_used_data":1108,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939472,"statfs":{"total":21470642176,"available":21442019328,"internally_reserved":0,"allocated":1134592,"data_stored":743407,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574903,"num_pgs":2,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27504,"kb_used_data":656,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939920,"statfs":{"total":21470642176,"available":21442478080,"internally_reserved":0,"allocated":671744,"data_stored":284127,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738430,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27500,"kb_used_data":652,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939924,"statfs":{"total":21470642176,"available":21442482176,"internally_reserved":0,"allocated":667648,"data_stored":284108,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,
"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":408,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-09T14:23:35.617 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-09T14:23:35.617 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 2026-03-09T14:23:35.617 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-09T14:23:35.617 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph health --format=json 2026-03-09T14:23:36.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:36 vm05 bash[20070]: cluster 2026-03-09T14:23:34.954637+0000 mgr.x (mgr.14150) 293 : cluster [DBG] pgmap v253: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 2 op/s 2026-03-09T14:23:36.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:36 vm05 bash[20070]: cluster 2026-03-09T14:23:34.954637+0000 mgr.x (mgr.14150) 293 : cluster [DBG] pgmap v253: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 2 op/s 2026-03-09T14:23:36.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:36 vm05 bash[20070]: audit 2026-03-09T14:23:35.554772+0000 mgr.x (mgr.14150) 294 : audit [DBG] from='client.14643 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:23:36.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:36 vm05 bash[20070]: audit 2026-03-09T14:23:35.554772+0000 mgr.x (mgr.14150) 294 : audit [DBG] from='client.14643 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:23:36.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:36 vm04 bash[19581]: cluster 2026-03-09T14:23:34.954637+0000 mgr.x (mgr.14150) 293 : cluster [DBG] pgmap v253: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 2 op/s 2026-03-09T14:23:36.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:36 vm04 bash[19581]: cluster 2026-03-09T14:23:34.954637+0000 mgr.x (mgr.14150) 293 : cluster [DBG] pgmap v253: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 2 op/s 2026-03-09T14:23:36.758 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:36 vm04 bash[19581]: audit 2026-03-09T14:23:35.554772+0000 mgr.x (mgr.14150) 294 : audit [DBG] from='client.14643 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:23:36.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:36 vm04 bash[19581]: audit 2026-03-09T14:23:35.554772+0000 mgr.x (mgr.14150) 294 : audit [DBG] from='client.14643 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:23:36.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:36 vm03 bash[17524]: cluster 2026-03-09T14:23:34.954637+0000 mgr.x (mgr.14150) 293 : cluster [DBG] pgmap v253: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 2 op/s 2026-03-09T14:23:36.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:36 vm03 bash[17524]: cluster 2026-03-09T14:23:34.954637+0000 mgr.x (mgr.14150) 293 : cluster [DBG] pgmap v253: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 2 op/s 2026-03-09T14:23:36.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:36 vm03 bash[17524]: audit 2026-03-09T14:23:35.554772+0000 mgr.x (mgr.14150) 294 : audit [DBG] from='client.14643 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:23:36.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:36 vm03 bash[17524]: audit 2026-03-09T14:23:35.554772+0000 mgr.x (mgr.14150) 294 : audit [DBG] from='client.14643 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:23:38.054 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:23:37 vm03 bash[37744]: debug there is no tcmu-runner data available 2026-03-09T14:23:38.758 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:23:38 vm05 bash[38699]: debug there is no tcmu-runner data available 2026-03-09T14:23:38.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:38 vm05 bash[20070]: cluster 2026-03-09T14:23:36.954925+0000 mgr.x (mgr.14150) 295 : cluster [DBG] pgmap v254: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:38.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:38 vm05 bash[20070]: cluster 2026-03-09T14:23:36.954925+0000 mgr.x (mgr.14150) 295 : cluster [DBG] pgmap v254: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:38.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:38 vm04 bash[19581]: cluster 2026-03-09T14:23:36.954925+0000 mgr.x (mgr.14150) 295 : cluster [DBG] pgmap v254: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:38.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:38 vm04 bash[19581]: cluster 2026-03-09T14:23:36.954925+0000 mgr.x (mgr.14150) 295 : cluster [DBG] pgmap v254: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:38.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:38 vm03 bash[17524]: cluster 2026-03-09T14:23:36.954925+0000 mgr.x (mgr.14150) 295 : cluster [DBG] pgmap v254: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:38.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:38 vm03 bash[17524]: 
cluster 2026-03-09T14:23:36.954925+0000 mgr.x (mgr.14150) 295 : cluster [DBG] pgmap v254: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:39.313 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config 2026-03-09T14:23:39.587 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:23:39.587 INFO:teuthology.orchestra.run.vm03.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-09T14:23:39.597 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:39 vm03 bash[17524]: audit 2026-03-09T14:23:37.672463+0000 mgr.x (mgr.14150) 296 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:39.597 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:39 vm03 bash[17524]: audit 2026-03-09T14:23:37.672463+0000 mgr.x (mgr.14150) 296 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:39.640 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-09T14:23:39.640 INFO:tasks.cephadm:Setup complete, yielding 2026-03-09T14:23:39.640 INFO:teuthology.run_tasks:Running task cephadm.shell... 2026-03-09T14:23:39.642 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm03.local 2026-03-09T14:23:39.642 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- bash -c 'ceph orch status' 2026-03-09T14:23:39.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:39 vm05 bash[20070]: audit 2026-03-09T14:23:37.672463+0000 mgr.x (mgr.14150) 296 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:39.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:39 vm05 bash[20070]: audit 2026-03-09T14:23:37.672463+0000 mgr.x (mgr.14150) 296 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:39.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:39 vm04 bash[19581]: audit 2026-03-09T14:23:37.672463+0000 mgr.x (mgr.14150) 296 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:39.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:39 vm04 bash[19581]: audit 2026-03-09T14:23:37.672463+0000 mgr.x (mgr.14150) 296 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:40.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:40 vm05 bash[20070]: audit 2026-03-09T14:23:38.642272+0000 mgr.x (mgr.14150) 297 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:40.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:40 vm05 bash[20070]: audit 2026-03-09T14:23:38.642272+0000 mgr.x (mgr.14150) 297 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:40.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:40 vm05 bash[20070]: cluster 
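The wait_until_healthy step above boils down to polling `ceph health --format=json` through `cephadm shell` until the returned status is HEALTH_OK, as the {"status":"HEALTH_OK","checks":{},"mutes":[]} line above shows. A minimal Python sketch of such a loop (not teuthology's actual implementation; the cephadm path, image and fsid are copied from this run):

    import json
    import subprocess
    import time

    CEPHADM = "/home/ubuntu/cephtest/cephadm"  # path used by this run
    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
    FSID = "3346de4a-1bc2-11f1-95ae-3796c8433614"

    def wait_until_healthy(timeout=300, interval=5):
        # Poll `ceph health --format=json` via `cephadm shell` until HEALTH_OK.
        deadline = time.time() + timeout
        while time.time() < deadline:
            out = subprocess.check_output(
                ["sudo", CEPHADM, "--image", IMAGE, "shell", "--fsid", FSID,
                 "--", "ceph", "health", "--format=json"])
            if json.loads(out)["status"] == "HEALTH_OK":
                return
            time.sleep(interval)
        raise TimeoutError("cluster never reached HEALTH_OK")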
2026-03-09T14:23:40.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:40 vm05 bash[20070]: cluster 2026-03-09T14:23:38.955211+0000 mgr.x (mgr.14150) 298 : cluster [DBG] pgmap v255: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:23:40.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:40 vm05 bash[20070]: audit 2026-03-09T14:23:39.586077+0000 mon.b (mon.2) 24 : audit [DBG] from='client.? 192.168.123.103:0/3797286656' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch
2026-03-09T14:23:40.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:40 vm04 bash[19581]: audit 2026-03-09T14:23:38.642272+0000 mgr.x (mgr.14150) 297 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:23:40.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:40 vm04 bash[19581]: cluster 2026-03-09T14:23:38.955211+0000 mgr.x (mgr.14150) 298 : cluster [DBG] pgmap v255: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:23:40.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:40 vm04 bash[19581]: audit 2026-03-09T14:23:39.586077+0000 mon.b (mon.2) 24 : audit [DBG] from='client.? 192.168.123.103:0/3797286656' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch
2026-03-09T14:23:40.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:40 vm03 bash[17524]: audit 2026-03-09T14:23:38.642272+0000 mgr.x (mgr.14150) 297 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:23:40.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:40 vm03 bash[17524]: cluster 2026-03-09T14:23:38.955211+0000 mgr.x (mgr.14150) 298 : cluster [DBG] pgmap v255: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:23:40.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:40 vm03 bash[17524]: audit 2026-03-09T14:23:39.586077+0000 mon.b (mon.2) 24 : audit [DBG] from='client.? 192.168.123.103:0/3797286656' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch
2026-03-09T14:23:42.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:42 vm05 bash[20070]: cluster 2026-03-09T14:23:40.955594+0000 mgr.x (mgr.14150) 299 : cluster [DBG] pgmap v256: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:23:42.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:42 vm04 bash[19581]: cluster 2026-03-09T14:23:40.955594+0000 mgr.x (mgr.14150) 299 : cluster [DBG] pgmap v256: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:23:42.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:42 vm03 bash[17524]: cluster 2026-03-09T14:23:40.955594+0000 mgr.x (mgr.14150) 299 : cluster [DBG] pgmap v256: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:23:43.332 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config
2026-03-09T14:23:43.610 INFO:teuthology.orchestra.run.vm03.stdout:Backend: cephadm
2026-03-09T14:23:43.610 INFO:teuthology.orchestra.run.vm03.stdout:Available: Yes
2026-03-09T14:23:43.610 INFO:teuthology.orchestra.run.vm03.stdout:Paused: No
2026-03-09T14:23:43.675 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- bash -c 'ceph orch ps'
2026-03-09T14:23:44.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:44 vm05 bash[20070]: cluster 2026-03-09T14:23:42.955938+0000 mgr.x (mgr.14150) 300 : cluster [DBG] pgmap v257: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:23:44.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:44 vm04 bash[19581]: cluster 2026-03-09T14:23:42.955938+0000 mgr.x (mgr.14150) 300 : cluster [DBG] pgmap v257: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:23:44.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:44 vm03 bash[17524]: cluster 2026-03-09T14:23:42.955938+0000 mgr.x (mgr.14150) 300 : cluster [DBG] pgmap v257: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:23:45.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:45 vm05 bash[20070]: audit 2026-03-09T14:23:43.610163+0000 mgr.x (mgr.14150) 301 : audit [DBG] from='client.24500 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:23:45.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:45 vm04 bash[19581]: audit 2026-03-09T14:23:43.610163+0000 mgr.x (mgr.14150) 301 : audit [DBG] from='client.24500 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
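The `ceph orch status` output above is plain text (Backend/Available/Paused). A quick sketch of how a script could assert on exactly those three lines; the helper below is illustrative, not part of the suite:

    def parse_orch_status(text):
        # "Backend: cephadm" / "Available: Yes" / "Paused: No" -> dict
        pairs = (line.split(":", 1) for line in text.splitlines() if ":" in line)
        return {k.strip(): v.strip() for k, v in pairs}

    status = parse_orch_status("Backend: cephadm\nAvailable: Yes\nPaused: No")
    assert status == {"Backend": "cephadm", "Available": "Yes", "Paused": "No"}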
2026-03-09T14:23:45.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:45 vm03 bash[17524]: audit 2026-03-09T14:23:43.610163+0000 mgr.x (mgr.14150) 301 : audit [DBG] from='client.24500 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:23:45.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:45 vm03 bash[17524]: audit 2026-03-09T14:23:43.610163+0000 mgr.x (mgr.14150) 301 : audit [DBG] from='client.24500 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:23:46.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:46 vm05 bash[20070]: cluster 2026-03-09T14:23:44.956215+0000 mgr.x (mgr.14150) 302 : cluster [DBG] pgmap v258: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:23:46.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:46 vm04 bash[19581]: cluster 2026-03-09T14:23:44.956215+0000 mgr.x (mgr.14150) 302 : cluster [DBG] pgmap v258: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:23:46.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:46 vm03 bash[17524]: cluster 2026-03-09T14:23:44.956215+0000 mgr.x (mgr.14150) 302 : cluster [DBG] pgmap v258: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:23:47.349 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config
2026-03-09T14:23:47.607 INFO:teuthology.orchestra.run.vm03.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-09T14:23:47.607 INFO:teuthology.orchestra.run.vm03.stdout:iscsi.iscsi.a vm03 *:5000 running (60s) 54s ago 60s 74.1M - 3.9 654f31e6858e 9da9393a4458
2026-03-09T14:23:47.607 INFO:teuthology.orchestra.run.vm03.stdout:iscsi.iscsi.b vm05 *:5000 running (59s) 54s ago 59s 47.4M - 3.9 654f31e6858e fb816bff797e
2026-03-09T14:23:47.607 INFO:teuthology.orchestra.run.vm03.stdout:mgr.x vm03 *:9283,8765 running (7m) 54s ago 7m 523M - 19.2.3-678-ge911bdeb 654f31e6858e 4b4fe7fc066c
2026-03-09T14:23:47.607 INFO:teuthology.orchestra.run.vm03.stdout:mon.a vm03 running (7m) 54s ago 7m 44.4M 2048M 19.2.3-678-ge911bdeb 654f31e6858e ffab16292423
2026-03-09T14:23:47.607 INFO:teuthology.orchestra.run.vm03.stdout:mon.b vm04 running (6m) 3m ago 6m 35.4M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 4598ea58091c
2026-03-09T14:23:47.607 INFO:teuthology.orchestra.run.vm03.stdout:mon.c vm05 running (6m) 54s ago 6m 40.6M 2048M 19.2.3-678-ge911bdeb 654f31e6858e f78314e681c4
2026-03-09T14:23:47.607 INFO:teuthology.orchestra.run.vm03.stdout:osd.0 vm03 running (5m) 54s ago 5m 38.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 846f85ab5a1a
2026-03-09T14:23:47.607 INFO:teuthology.orchestra.run.vm03.stdout:osd.1 vm03 running (4m) 54s ago 4m 37.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e e23a5c811a6d
2026-03-09T14:23:47.607 INFO:teuthology.orchestra.run.vm03.stdout:osd.2 vm04 running (4m) 3m ago 4m 36.0M 1517M 19.2.3-678-ge911bdeb 654f31e6858e 39437067f2d0
2026-03-09T14:23:47.607 INFO:teuthology.orchestra.run.vm03.stdout:osd.3 vm04 running (3m) 3m ago 3m 30.5M 1517M 19.2.3-678-ge911bdeb 654f31e6858e 6f21f2b07693
2026-03-09T14:23:47.607 INFO:teuthology.orchestra.run.vm03.stdout:osd.4 vm04 running (3m) 3m ago 3m 22.5M 1517M 19.2.3-678-ge911bdeb 654f31e6858e a6f5ae0207ae
2026-03-09T14:23:47.607 INFO:teuthology.orchestra.run.vm03.stdout:osd.5 vm05 running (2m) 54s ago 2m 57.5M 1517M 19.2.3-678-ge911bdeb 654f31e6858e 1a8b8533000b
2026-03-09T14:23:47.607 INFO:teuthology.orchestra.run.vm03.stdout:osd.6 vm05 running (2m) 54s ago 2m 35.6M 1517M 19.2.3-678-ge911bdeb 654f31e6858e 66a7a4af7ec9
2026-03-09T14:23:47.607 INFO:teuthology.orchestra.run.vm03.stdout:osd.7 vm05 running (97s) 54s ago 99s 31.9M 1517M 19.2.3-678-ge911bdeb 654f31e6858e 14181961d5b5
2026-03-09T14:23:47.665 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- bash -c 'ceph orch ls'
2026-03-09T14:23:48.053 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:23:47 vm03 bash[37744]: debug there is no tcmu-runner data available
2026-03-09T14:23:48.758 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:23:48 vm05 bash[38699]: debug there is no tcmu-runner data available
2026-03-09T14:23:48.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:48 vm05 bash[20070]: cluster 2026-03-09T14:23:46.956495+0000 mgr.x (mgr.14150) 303 : cluster [DBG] pgmap v259: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:23:48.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:48 vm04 bash[19581]: cluster 2026-03-09T14:23:46.956495+0000 mgr.x (mgr.14150) 303 : cluster [DBG] pgmap v259: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:23:48.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:48 vm03 bash[17524]: cluster 2026-03-09T14:23:46.956495+0000 mgr.x (mgr.14150) 303 : cluster [DBG] pgmap v259: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
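The `ceph orch ps` table above shows every daemon in the running state: the two iscsi gateways, mgr.x, three mons, and eight osds. A scripted check would normally consume the JSON output rather than scrape the table; a sketch, assuming the usual `--format json` fields such as daemon_name and status_desc, which this log does not itself confirm:

    import json
    import subprocess

    def non_running_daemons():
        # `ceph orch ps --format json` returns one object per daemon; the
        # "status_desc" value ("running", "stopped", ...) is assumed here.
        out = subprocess.check_output(["ceph", "orch", "ps", "--format", "json"])
        return [d.get("daemon_name") for d in json.loads(out)
                if d.get("status_desc") != "running"]

    assert not non_running_daemons(), "some daemons are not running"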
2026-03-09T14:23:50.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:49 vm05 bash[20070]: audit 2026-03-09T14:23:47.602751+0000 mgr.x (mgr.14150) 304 : audit [DBG] from='client.14661 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:23:50.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:49 vm05 bash[20070]: audit 2026-03-09T14:23:47.681723+0000 mgr.x (mgr.14150) 305 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:23:50.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:49 vm04 bash[19581]: audit 2026-03-09T14:23:47.602751+0000 mgr.x (mgr.14150) 304 : audit [DBG] from='client.14661 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:23:50.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:49 vm04 bash[19581]: audit 2026-03-09T14:23:47.681723+0000 mgr.x (mgr.14150) 305 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:23:50.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:49 vm03 bash[17524]: audit 2026-03-09T14:23:47.602751+0000 mgr.x (mgr.14150) 304 : audit [DBG] from='client.14661 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:23:50.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:49 vm03 bash[17524]: audit 2026-03-09T14:23:47.681723+0000 mgr.x (mgr.14150) 305 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:23:51.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:50 vm05 bash[20070]: audit 2026-03-09T14:23:48.653003+0000 mgr.x (mgr.14150) 306 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:23:51.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:50 vm05 bash[20070]: cluster 2026-03-09T14:23:48.956760+0000 mgr.x (mgr.14150) 307 : cluster [DBG] pgmap v260: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:23:51.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:50 vm04 bash[19581]: audit 2026-03-09T14:23:48.653003+0000 mgr.x (mgr.14150) 306 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:23:51.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:50 vm04 bash[19581]: cluster 2026-03-09T14:23:48.956760+0000 mgr.x (mgr.14150) 307 : cluster [DBG] pgmap v260: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:23:51.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:50 vm03 bash[17524]: audit 2026-03-09T14:23:48.653003+0000 mgr.x (mgr.14150) 306 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:23:51.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:50 vm03 bash[17524]: cluster 2026-03-09T14:23:48.956760+0000 mgr.x (mgr.14150) 307 : cluster [DBG] pgmap v260: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:23:51.054 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:50 vm03 bash[17524]: cluster 2026-03-09T14:23:48.956760+0000 mgr.x (mgr.14150) 307 : cluster [DBG] pgmap v260: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:23:51.365 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config
2026-03-09T14:23:51.625 INFO:teuthology.orchestra.run.vm03.stdout:NAME PORTS RUNNING REFRESHED AGE PLACEMENT
2026-03-09T14:23:51.625 INFO:teuthology.orchestra.run.vm03.stdout:iscsi.datapool ?:5000 2/2 58s ago 65s vm03=iscsi.a;vm05=iscsi.b;count:2
2026-03-09T14:23:51.625 INFO:teuthology.orchestra.run.vm03.stdout:mgr 1/1 58s ago 5m vm03=x;count:1
2026-03-09T14:23:51.625 INFO:teuthology.orchestra.run.vm03.stdout:mon 3/3 3m ago 6m vm03:192.168.123.103=a;vm04:192.168.123.104=b;vm05:192.168.123.105=c;count:3
2026-03-09T14:23:51.625 INFO:teuthology.orchestra.run.vm03.stdout:osd 8 3m ago -
2026-03-09T14:23:51.639 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:51 vm03 bash[17524]: cluster 2026-03-09T14:23:50.957119+0000 mgr.x (mgr.14150) 308 : cluster [DBG] pgmap v261: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:23:51.682 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- bash -c 'ceph orch host ls'
2026-03-09T14:23:52.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:51 vm05 bash[20070]: cluster 2026-03-09T14:23:50.957119+0000 mgr.x (mgr.14150) 308 : cluster [DBG] pgmap v261: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:23:52.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:51 vm04 bash[19581]: cluster 2026-03-09T14:23:50.957119+0000 mgr.x (mgr.14150) 308 : cluster [DBG] pgmap v261: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:23:52.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:52 vm03 bash[17524]: audit 2026-03-09T14:23:51.623417+0000 mgr.x (mgr.14150) 309 : audit [DBG] from='client.14667 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
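The `ceph orch ls` listing above confirms the iscsi.datapool service is fully placed (RUNNING 2/2, one gateway each on vm03 and vm05). The equivalent check against JSON output, assuming the status.running/status.size fields commonly present in cephadm's `orch ls --format json` output (again an assumption, not something this log shows):

    import json
    import subprocess

    out = subprocess.check_output(["ceph", "orch", "ls", "--format", "json"])
    for svc in json.loads(out):
        # Compare placed vs. expected daemon counts per service.
        st = svc.get("status", {})
        if st.get("running") != st.get("size"):
            print("degraded service:", svc.get("service_name"))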
cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:23:53.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:52 vm04 bash[19581]: audit 2026-03-09T14:23:51.623417+0000 mgr.x (mgr.14150) 309 : audit [DBG] from='client.14667 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:23:53.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:52 vm04 bash[19581]: audit 2026-03-09T14:23:51.623417+0000 mgr.x (mgr.14150) 309 : audit [DBG] from='client.14667 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:23:54.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:53 vm05 bash[20070]: cluster 2026-03-09T14:23:52.957402+0000 mgr.x (mgr.14150) 310 : cluster [DBG] pgmap v262: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:54.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:53 vm05 bash[20070]: cluster 2026-03-09T14:23:52.957402+0000 mgr.x (mgr.14150) 310 : cluster [DBG] pgmap v262: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:54.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:53 vm05 bash[20070]: audit 2026-03-09T14:23:53.399567+0000 mon.a (mon.0) 732 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:23:54.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:53 vm05 bash[20070]: audit 2026-03-09T14:23:53.399567+0000 mon.a (mon.0) 732 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:23:54.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:53 vm04 bash[19581]: cluster 2026-03-09T14:23:52.957402+0000 mgr.x (mgr.14150) 310 : cluster [DBG] pgmap v262: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:54.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:53 vm04 bash[19581]: cluster 2026-03-09T14:23:52.957402+0000 mgr.x (mgr.14150) 310 : cluster [DBG] pgmap v262: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:54.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:53 vm04 bash[19581]: audit 2026-03-09T14:23:53.399567+0000 mon.a (mon.0) 732 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:23:54.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:53 vm04 bash[19581]: audit 2026-03-09T14:23:53.399567+0000 mon.a (mon.0) 732 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:23:54.054 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:53 vm03 bash[17524]: cluster 2026-03-09T14:23:52.957402+0000 mgr.x (mgr.14150) 310 : cluster [DBG] pgmap v262: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:54.054 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:53 vm03 bash[17524]: cluster 2026-03-09T14:23:52.957402+0000 mgr.x (mgr.14150) 310 : cluster [DBG] pgmap v262: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:54.054 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 
14:23:53 vm03 bash[17524]: audit 2026-03-09T14:23:53.399567+0000 mon.a (mon.0) 732 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:23:54.054 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:53 vm03 bash[17524]: audit 2026-03-09T14:23:53.399567+0000 mon.a (mon.0) 732 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:23:55.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:54 vm05 bash[20070]: audit 2026-03-09T14:23:53.767067+0000 mon.a (mon.0) 733 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:23:55.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:54 vm05 bash[20070]: audit 2026-03-09T14:23:53.767067+0000 mon.a (mon.0) 733 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:23:55.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:54 vm05 bash[20070]: audit 2026-03-09T14:23:53.767725+0000 mon.a (mon.0) 734 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:23:55.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:54 vm05 bash[20070]: audit 2026-03-09T14:23:53.767725+0000 mon.a (mon.0) 734 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:23:55.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:54 vm05 bash[20070]: audit 2026-03-09T14:23:53.834930+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:23:55.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:54 vm05 bash[20070]: audit 2026-03-09T14:23:53.834930+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:23:55.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:54 vm04 bash[19581]: audit 2026-03-09T14:23:53.767067+0000 mon.a (mon.0) 733 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:23:55.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:54 vm04 bash[19581]: audit 2026-03-09T14:23:53.767067+0000 mon.a (mon.0) 733 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:23:55.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:54 vm04 bash[19581]: audit 2026-03-09T14:23:53.767725+0000 mon.a (mon.0) 734 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:23:55.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:54 vm04 bash[19581]: audit 2026-03-09T14:23:53.767725+0000 mon.a (mon.0) 734 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:23:55.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:54 vm04 bash[19581]: audit 2026-03-09T14:23:53.834930+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:23:55.008 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:54 vm04 bash[19581]: audit 2026-03-09T14:23:53.834930+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:23:55.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:54 vm03 bash[17524]: audit 2026-03-09T14:23:53.767067+0000 mon.a (mon.0) 733 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:23:55.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:54 vm03 bash[17524]: audit 2026-03-09T14:23:53.767067+0000 mon.a (mon.0) 733 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:23:55.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:54 vm03 bash[17524]: audit 2026-03-09T14:23:53.767725+0000 mon.a (mon.0) 734 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:23:55.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:54 vm03 bash[17524]: audit 2026-03-09T14:23:53.767725+0000 mon.a (mon.0) 734 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:23:55.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:54 vm03 bash[17524]: audit 2026-03-09T14:23:53.834930+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:23:55.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:54 vm03 bash[17524]: audit 2026-03-09T14:23:53.834930+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:23:55.381 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config 2026-03-09T14:23:55.621 INFO:teuthology.orchestra.run.vm03.stdout:HOST ADDR LABELS STATUS 2026-03-09T14:23:55.621 INFO:teuthology.orchestra.run.vm03.stdout:vm03 192.168.123.103 2026-03-09T14:23:55.621 INFO:teuthology.orchestra.run.vm03.stdout:vm04 192.168.123.104 2026-03-09T14:23:55.621 INFO:teuthology.orchestra.run.vm03.stdout:vm05 192.168.123.105 2026-03-09T14:23:55.621 INFO:teuthology.orchestra.run.vm03.stdout:3 hosts in cluster 2026-03-09T14:23:55.645 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:55 vm03 bash[17524]: cluster 2026-03-09T14:23:54.957662+0000 mgr.x (mgr.14150) 311 : cluster [DBG] pgmap v263: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:23:55.645 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:55 vm03 bash[17524]: cluster 2026-03-09T14:23:54.957662+0000 mgr.x (mgr.14150) 311 : cluster [DBG] pgmap v263: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:23:55.685 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- bash -c 'ceph orch device ls' 2026-03-09T14:23:56.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:55 vm05 bash[20070]: cluster 2026-03-09T14:23:54.957662+0000 mgr.x (mgr.14150) 311 : cluster [DBG] pgmap v263: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 
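The `ceph orch host ls` output above lists the three cluster hosts by name and address. If a test wanted to assert that roster rather than eyeball it, a sketch along these lines could work (the hostname/addr keys of `--format json` are assumed, not confirmed by this log):

    import json
    import subprocess

    hosts = json.loads(subprocess.check_output(
        ["ceph", "orch", "host", "ls", "--format", "json"]))
    # Expect exactly the three VPS targets of this run.
    assert {h["hostname"] for h in hosts} == {"vm03", "vm04", "vm05"}
    assert all(h.get("addr") for h in hosts)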
KiB/s rd, 2 op/s 2026-03-09T14:23:56.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:55 vm05 bash[20070]: cluster 2026-03-09T14:23:54.957662+0000 mgr.x (mgr.14150) 311 : cluster [DBG] pgmap v263: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:23:56.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:55 vm04 bash[19581]: cluster 2026-03-09T14:23:54.957662+0000 mgr.x (mgr.14150) 311 : cluster [DBG] pgmap v263: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:23:56.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:55 vm04 bash[19581]: cluster 2026-03-09T14:23:54.957662+0000 mgr.x (mgr.14150) 311 : cluster [DBG] pgmap v263: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:23:56.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:56 vm03 bash[17524]: audit 2026-03-09T14:23:55.620649+0000 mgr.x (mgr.14150) 312 : audit [DBG] from='client.14673 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:23:56.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:56 vm03 bash[17524]: audit 2026-03-09T14:23:55.620649+0000 mgr.x (mgr.14150) 312 : audit [DBG] from='client.14673 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:23:57.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:56 vm05 bash[20070]: audit 2026-03-09T14:23:55.620649+0000 mgr.x (mgr.14150) 312 : audit [DBG] from='client.14673 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:23:57.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:56 vm05 bash[20070]: audit 2026-03-09T14:23:55.620649+0000 mgr.x (mgr.14150) 312 : audit [DBG] from='client.14673 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:23:57.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:56 vm04 bash[19581]: audit 2026-03-09T14:23:55.620649+0000 mgr.x (mgr.14150) 312 : audit [DBG] from='client.14673 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:23:57.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:56 vm04 bash[19581]: audit 2026-03-09T14:23:55.620649+0000 mgr.x (mgr.14150) 312 : audit [DBG] from='client.14673 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:23:58.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:57 vm03 bash[17524]: cluster 2026-03-09T14:23:56.958003+0000 mgr.x (mgr.14150) 313 : cluster [DBG] pgmap v264: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:58.054 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:57 vm03 bash[17524]: cluster 2026-03-09T14:23:56.958003+0000 mgr.x (mgr.14150) 313 : cluster [DBG] pgmap v264: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:58.054 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:23:57 vm03 bash[37744]: debug there is no tcmu-runner data available 2026-03-09T14:23:58.258 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:57 vm05 bash[20070]: cluster 2026-03-09T14:23:56.958003+0000 mgr.x (mgr.14150) 313 : cluster [DBG] pgmap v264: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB 
avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:58.258 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:57 vm05 bash[20070]: cluster 2026-03-09T14:23:56.958003+0000 mgr.x (mgr.14150) 313 : cluster [DBG] pgmap v264: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:58.258 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:57 vm04 bash[19581]: cluster 2026-03-09T14:23:56.958003+0000 mgr.x (mgr.14150) 313 : cluster [DBG] pgmap v264: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:58.258 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:57 vm04 bash[19581]: cluster 2026-03-09T14:23:56.958003+0000 mgr.x (mgr.14150) 313 : cluster [DBG] pgmap v264: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:23:59.008 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:23:58 vm05 bash[38699]: debug there is no tcmu-runner data available 2026-03-09T14:23:59.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:58 vm05 bash[20070]: audit 2026-03-09T14:23:57.692222+0000 mgr.x (mgr.14150) 314 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:59.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:58 vm05 bash[20070]: audit 2026-03-09T14:23:57.692222+0000 mgr.x (mgr.14150) 314 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:59.258 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:58 vm04 bash[19581]: audit 2026-03-09T14:23:57.692222+0000 mgr.x (mgr.14150) 314 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:59.258 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:58 vm04 bash[19581]: audit 2026-03-09T14:23:57.692222+0000 mgr.x (mgr.14150) 314 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:59.303 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:58 vm03 bash[17524]: audit 2026-03-09T14:23:57.692222+0000 mgr.x (mgr.14150) 314 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:59.303 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:58 vm03 bash[17524]: audit 2026-03-09T14:23:57.692222+0000 mgr.x (mgr.14150) 314 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:23:59.399 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config 2026-03-09T14:23:59.673 INFO:teuthology.orchestra.run.vm03.stdout:HOST PATH TYPE DEVICE ID SIZE AVAILABLE REFRESHED REJECT REASONS 2026-03-09T14:23:59.673 INFO:teuthology.orchestra.run.vm03.stdout:vm03 /dev/sr0 hdd QEMU_DVD-ROM_QM00003 366k No 4m ago Has a FileSystem, Insufficient space (<5GB) 2026-03-09T14:23:59.673 INFO:teuthology.orchestra.run.vm03.stdout:vm03 /dev/vdb hdd DWNBRSTVMM03001 20.0G Yes 4m ago 2026-03-09T14:23:59.673 INFO:teuthology.orchestra.run.vm03.stdout:vm03 /dev/vdc hdd DWNBRSTVMM03002 20.0G Yes 4m ago 2026-03-09T14:23:59.673 INFO:teuthology.orchestra.run.vm03.stdout:vm03 /dev/vdd hdd DWNBRSTVMM03003 20.0G No 4m ago Has a 
FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-09T14:23:59.674 INFO:teuthology.orchestra.run.vm03.stdout:vm03 /dev/vde hdd DWNBRSTVMM03004 20.0G No 4m ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-09T14:23:59.674 INFO:teuthology.orchestra.run.vm03.stdout:vm04 /dev/sr0 hdd QEMU_DVD-ROM_QM00003 366k No 3m ago Has a FileSystem, Insufficient space (<5GB) 2026-03-09T14:23:59.674 INFO:teuthology.orchestra.run.vm03.stdout:vm04 /dev/vdb hdd DWNBRSTVMM04001 20.0G Yes 3m ago 2026-03-09T14:23:59.674 INFO:teuthology.orchestra.run.vm03.stdout:vm04 /dev/vdc hdd DWNBRSTVMM04002 20.0G No 3m ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-09T14:23:59.674 INFO:teuthology.orchestra.run.vm03.stdout:vm04 /dev/vdd hdd DWNBRSTVMM04003 20.0G No 3m ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-09T14:23:59.674 INFO:teuthology.orchestra.run.vm03.stdout:vm04 /dev/vde hdd DWNBRSTVMM04004 20.0G No 3m ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-09T14:23:59.674 INFO:teuthology.orchestra.run.vm03.stdout:vm05 /dev/sr0 hdd QEMU_DVD-ROM_QM00003 366k No 98s ago Has a FileSystem, Insufficient space (<5GB) 2026-03-09T14:23:59.674 INFO:teuthology.orchestra.run.vm03.stdout:vm05 /dev/vdb hdd DWNBRSTVMM05001 20.0G Yes 98s ago 2026-03-09T14:23:59.674 INFO:teuthology.orchestra.run.vm03.stdout:vm05 /dev/vdc hdd DWNBRSTVMM05002 20.0G No 98s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-09T14:23:59.674 INFO:teuthology.orchestra.run.vm03.stdout:vm05 /dev/vdd hdd DWNBRSTVMM05003 20.0G No 98s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-09T14:23:59.674 INFO:teuthology.orchestra.run.vm03.stdout:vm05 /dev/vde hdd DWNBRSTVMM05004 20.0G No 98s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-09T14:23:59.725 INFO:teuthology.run_tasks:Running task install... 
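
Every orchestrator check in this task runs through the same wrapper: cephadm starts a throwaway container from the pinned CI image and executes the ceph CLI against the cluster selected by --fsid. A minimal sketch of that pattern, using the image, fsid, and keyring paths shown in this job (they would differ in any other run):

    # Run a one-off orchestrator query inside a cephadm shell container.
    IMAGE=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
    FSID=3346de4a-1bc2-11f1-95ae-3796c8433614
    sudo cephadm --image "$IMAGE" shell \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid "$FSID" -- bash -c 'ceph orch device ls'

In the device listing above, a device shows AVAILABLE Yes only when ceph-volume finds no filesystem, no LVM metadata, and enough free space; the REJECT REASONS column explains why /dev/sr0 and the already-consumed vdX disks are skipped.
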
2026-03-09T14:23:59.727 DEBUG:teuthology.task.install:project ceph
2026-03-09T14:23:59.727 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-09T14:23:59.727 DEBUG:teuthology.task.install:config {'extra_system_packages': {'deb': ['open-iscsi', 'multipath-tools', 'python3-xmltodict', 'python3-jmespath'], 'rpm': ['iscsi-initiator-utils', 'device-mapper-multipath', 'bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}
2026-03-09T14:23:59.727 INFO:teuthology.task.install:Using flavor: default
2026-03-09T14:23:59.729 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-09T14:23:59.729 INFO:teuthology.task.install:extra packages: []
2026-03-09T14:23:59.729 DEBUG:teuthology.orchestra.run.vm03:> sudo apt-key list | grep Ceph
2026-03-09T14:23:59.729 DEBUG:teuthology.orchestra.run.vm04:> sudo apt-key list | grep Ceph
2026-03-09T14:23:59.729 DEBUG:teuthology.orchestra.run.vm05:> sudo apt-key list | grep Ceph
2026-03-09T14:23:59.769 INFO:teuthology.orchestra.run.vm05.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
2026-03-09T14:23:59.770 INFO:teuthology.orchestra.run.vm04.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
2026-03-09T14:23:59.771 INFO:teuthology.orchestra.run.vm03.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
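
The apt-key warnings are expected on Ubuntu 22.04: apt-key(8) is deprecated, and signing keys are meant to live as per-repository keyring files instead. A hedged sketch of the modern equivalent of the check above (the key URL is illustrative, not taken from this log):

    # Inspect trusted keyring files directly instead of using apt-key.
    for k in /etc/apt/trusted.gpg.d/*.gpg; do gpg --show-keys "$k"; done | grep -i ceph
    # Installing a key the recommended way (illustrative URL):
    sudo install -d /etc/apt/keyrings
    curl -fsSL https://download.ceph.com/keys/release.asc |
        sudo gpg --dearmor -o /etc/apt/keyrings/ceph.gpg
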
2026-03-09T14:23:59.787 INFO:teuthology.orchestra.run.vm05.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build)
2026-03-09T14:23:59.787 INFO:teuthology.orchestra.run.vm05.stdout:uid [ unknown] Ceph.com (release key)
2026-03-09T14:23:59.788 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64
2026-03-09T14:23:59.788 INFO:teuthology.task.install.deb:Installing system (non-project) packages: open-iscsi, multipath-tools, python3-xmltodict, python3-jmespath on remote deb x86_64
2026-03-09T14:23:59.788 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T14:23:59.849 INFO:teuthology.orchestra.run.vm04.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build)
2026-03-09T14:23:59.850 INFO:teuthology.orchestra.run.vm04.stdout:uid [ unknown] Ceph.com (release key)
2026-03-09T14:23:59.850 INFO:teuthology.orchestra.run.vm03.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build)
2026-03-09T14:23:59.850 INFO:teuthology.orchestra.run.vm03.stdout:uid [ unknown] Ceph.com (release key)
2026-03-09T14:23:59.850 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64
2026-03-09T14:23:59.850 INFO:teuthology.task.install.deb:Installing system (non-project) packages: open-iscsi, multipath-tools, python3-xmltodict, python3-jmespath on remote deb x86_64
2026-03-09T14:23:59.850 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T14:23:59.850 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64
2026-03-09T14:23:59.850 INFO:teuthology.task.install.deb:Installing system (non-project) packages: open-iscsi, multipath-tools, python3-xmltodict, python3-jmespath on remote deb x86_64
2026-03-09T14:23:59.850 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T14:24:00.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:59 vm05 bash[20070]: audit 2026-03-09T14:23:58.659897+0000 mgr.x (mgr.14150) 315 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:00.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:23:59 vm05 bash[20070]: cluster 2026-03-09T14:23:58.958304+0000 mgr.x (mgr.14150) 316 : cluster [DBG] pgmap v265: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:00.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:59 vm04 bash[19581]: audit 2026-03-09T14:23:58.659897+0000 mgr.x (mgr.14150) 315 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:00.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:23:59 vm04 bash[19581]: cluster 2026-03-09T14:23:58.958304+0000 mgr.x (mgr.14150) 316 : cluster [DBG] pgmap v265: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:00.054 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:59 vm03 bash[17524]: audit 2026-03-09T14:23:58.659897+0000 mgr.x (mgr.14150) 315 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:00.054 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:23:59 vm03 bash[17524]: cluster 2026-03-09T14:23:58.958304+0000 mgr.x (mgr.14150) 316 : cluster [DBG] pgmap v265: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:00.422 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/
2026-03-09T14:24:00.423 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy
2026-03-09T14:24:00.512 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/
2026-03-09T14:24:00.512 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy
2026-03-09T14:24:00.512 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/
2026-03-09T14:24:00.512 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy
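
teuthology picks the package repository by asking shaman which builds are ready for this sha1, distro, and flavor; the 'Pulling from' URLs above are derived from the answer. The same query can be reproduced by hand; the jq filter is illustrative and assumes the response is a JSON list of build records:

    curl -s 'https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df' |
        jq '.[0]'   # first ready build; its chacra repo fields yield the URL above
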
2026-03-09T14:24:01.016 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-09T14:24:01.016 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/apt/sources.list.d/ceph.list
2026-03-09T14:24:01.018 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-09T14:24:01.018 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/apt/sources.list.d/ceph.list
2026-03-09T14:24:01.026 DEBUG:teuthology.orchestra.run.vm05:> sudo apt-get update
2026-03-09T14:24:01.027 DEBUG:teuthology.orchestra.run.vm03:> sudo apt-get update
2026-03-09T14:24:01.057 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-09T14:24:01.057 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/apt/sources.list.d/ceph.list
2026-03-09T14:24:01.065 DEBUG:teuthology.orchestra.run.vm04:> sudo apt-get update
2026-03-09T14:24:01.141 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:00 vm04 bash[19581]: audit 2026-03-09T14:23:59.672024+0000 mgr.x (mgr.14150) 317 : audit [DBG] from='client.14679 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:24:01.191 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:00 vm05 bash[20070]: audit 2026-03-09T14:23:59.672024+0000 mgr.x (mgr.14150) 317 : audit [DBG] from='client.14679 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:24:01.208 INFO:teuthology.orchestra.run.vm05.stdout:Hit:1 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-09T14:24:01.208 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:00 vm03 bash[17524]: audit 2026-03-09T14:23:59.672024+0000 mgr.x (mgr.14150) 317 : audit [DBG] from='client.14679 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:24:01.216 INFO:teuthology.orchestra.run.vm03.stdout:Hit:1 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-09T14:24:01.216 INFO:teuthology.orchestra.run.vm03.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-09T14:24:01.225 INFO:teuthology.orchestra.run.vm03.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-09T14:24:01.232 INFO:teuthology.orchestra.run.vm03.stdout:Hit:4 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-09T14:24:01.369 INFO:teuthology.orchestra.run.vm04.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-09T14:24:01.374 INFO:teuthology.orchestra.run.vm04.stdout:Hit:2 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-09T14:24:01.401 INFO:teuthology.orchestra.run.vm04.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-09T14:24:01.436 INFO:teuthology.orchestra.run.vm04.stdout:Hit:4 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-09T14:24:01.585 INFO:teuthology.orchestra.run.vm05.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-09T14:24:01.675 INFO:teuthology.orchestra.run.vm03.stdout:Ign:5 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease
2026-03-09T14:24:01.688 INFO:teuthology.orchestra.run.vm05.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-09T14:24:01.695 INFO:teuthology.orchestra.run.vm04.stdout:Ign:5 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease
2026-03-09T14:24:01.700 INFO:teuthology.orchestra.run.vm05.stdout:Ign:4 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease
2026-03-09T14:24:01.790 INFO:teuthology.orchestra.run.vm03.stdout:Get:6 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release [7662 B]
2026-03-09T14:24:01.792 INFO:teuthology.orchestra.run.vm05.stdout:Hit:5 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-09T14:24:01.811 INFO:teuthology.orchestra.run.vm04.stdout:Get:6 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release [7662 B]
2026-03-09T14:24:01.820 INFO:teuthology.orchestra.run.vm05.stdout:Get:6 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release [7662 B]
2026-03-09T14:24:01.904 INFO:teuthology.orchestra.run.vm03.stdout:Ign:7 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg
2026-03-09T14:24:01.928 INFO:teuthology.orchestra.run.vm04.stdout:Ign:7 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg
2026-03-09T14:24:01.940 INFO:teuthology.orchestra.run.vm05.stdout:Ign:7 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg
2026-03-09T14:24:02.018 INFO:teuthology.orchestra.run.vm03.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB]
2026-03-09T14:24:02.044 INFO:teuthology.orchestra.run.vm04.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB]
2026-03-09T14:24:02.060 INFO:teuthology.orchestra.run.vm05.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB]
2026-03-09T14:24:02.089 INFO:teuthology.orchestra.run.vm03.stdout:Fetched 25.8 kB in 1s (28.3 kB/s)
2026-03-09T14:24:02.140 INFO:teuthology.orchestra.run.vm05.stdout:Fetched 25.8 kB in 1s (26.8 kB/s)
2026-03-09T14:24:02.234 INFO:teuthology.orchestra.run.vm04.stdout:Fetched 25.8 kB in 1s (25.5 kB/s)
2026-03-09T14:24:02.258 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:01 vm05 bash[20070]: cluster 2026-03-09T14:24:00.958588+0000 mgr.x (mgr.14150) 318 : cluster [DBG] pgmap v266: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
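
The 'sudo dd of=/etc/apt/sources.list.d/ceph.list' commands above write the repository file from stdin, so its contents never appear in the log. Judging from the chacra URL apt fetches immediately afterwards, the file plausibly contains a single deb line; an illustrative reconstruction:

    # Illustrative; the actual file contents are not echoed in this log.
    echo 'deb https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy main' |
        sudo dd of=/etc/apt/sources.list.d/ceph.list

The repository publishes a plain Release file but no InRelease or Release.gpg, which is why apt logs the Ign lines above before fetching Release [7662 B] directly.
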
2026-03-09T14:24:02.258 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:01 vm04 bash[19581]: cluster 2026-03-09T14:24:00.958588+0000 mgr.x (mgr.14150) 318 : cluster [DBG] pgmap v266: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:02.304 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:01 vm03 bash[17524]: cluster 2026-03-09T14:24:00.958588+0000 mgr.x (mgr.14150) 318 : cluster [DBG] pgmap v266: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:02.829 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T14:24:02.833 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-09T14:24:02.843 DEBUG:teuthology.orchestra.run.vm03:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy
2026-03-09T14:24:02.846 DEBUG:teuthology.orchestra.run.vm05:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy
2026-03-09T14:24:02.882 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T14:24:02.883 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-09T14:24:03.002 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-09T14:24:03.016 DEBUG:teuthology.orchestra.run.vm04:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy
2026-03-09T14:24:03.055 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T14:24:03.055 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T14:24:03.097 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-09T14:24:03.104 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-09T14:24:03.105 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-09T14:24:03.164 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:24:03.164 INFO:teuthology.orchestra.run.vm03.stdout:  kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T14:24:03.165 INFO:teuthology.orchestra.run.vm03.stdout:  libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-09T14:24:03.165 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
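
Every project package is pinned to the exact CI build version so apt cannot silently substitute newer packages from the Ubuntu archive. A trimmed sketch of the pattern (package set abbreviated; the full commands above pin all eighteen deb packages):

    VER=19.2.3-678-ge911bdeb-1jammy
    sudo DEBIAN_FRONTEND=noninteractive apt-get -y \
        -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" \
        install ceph=$VER ceph-common=$VER cephadm=$VER librados2=$VER librbd1=$VER

Note that the task still passes --force-yes, which apt has deprecated in favour of the finer-grained --allow-* options since apt 1.1.
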
2026-03-09T14:24:03.165 INFO:teuthology.orchestra.run.vm03.stdout:The following additional packages will be installed:
2026-03-09T14:24:03.165 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local
2026-03-09T14:24:03.165 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq
2026-03-09T14:24:03.165 INFO:teuthology.orchestra.run.vm03.stdout:  libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-09T14:24:03.165 INFO:teuthology.orchestra.run.vm03.stdout:  liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5
2026-03-09T14:24:03.166 INFO:teuthology.orchestra.run.vm03.stdout:  libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph
2026-03-09T14:24:03.166 INFO:teuthology.orchestra.run.vm03.stdout:  libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-09T14:24:03.166 INFO:teuthology.orchestra.run.vm03.stdout:  pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T14:24:03.166 INFO:teuthology.orchestra.run.vm03.stdout:  python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-09T14:24:03.166 INFO:teuthology.orchestra.run.vm03.stdout:  python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-09T14:24:03.166 INFO:teuthology.orchestra.run.vm03.stdout:  python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T14:24:03.166 INFO:teuthology.orchestra.run.vm03.stdout:  python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T14:24:03.166 INFO:teuthology.orchestra.run.vm03.stdout:  python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T14:24:03.166 INFO:teuthology.orchestra.run.vm03.stdout:  python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-09T14:24:03.166 INFO:teuthology.orchestra.run.vm03.stdout:  python3-prettytable python3-psutil python3-py python3-pygments
2026-03-09T14:24:03.166 INFO:teuthology.orchestra.run.vm03.stdout:  python3-pyinotify python3-pytest python3-repoze.lru
2026-03-09T14:24:03.166 INFO:teuthology.orchestra.run.vm03.stdout:  python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T14:24:03.166 INFO:teuthology.orchestra.run.vm03.stdout:  python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T14:24:03.166 INFO:teuthology.orchestra.run.vm03.stdout:  python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T14:24:03.166 INFO:teuthology.orchestra.run.vm03.stdout:  python3-toml python3-waitress python3-wcwidth python3-webob
2026-03-09T14:24:03.166 INFO:teuthology.orchestra.run.vm03.stdout:  python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:24:03.166 INFO:teuthology.orchestra.run.vm03.stdout:  qttranslations5-l10n smartmontools socat unzip xmlstarlet zip
2026-03-09T14:24:03.167 INFO:teuthology.orchestra.run.vm03.stdout:Suggested packages:
2026-03-09T14:24:03.167 INFO:teuthology.orchestra.run.vm03.stdout:  python3-influxdb readline-doc python3-beaker python-mako-doc
2026-03-09T14:24:03.167 INFO:teuthology.orchestra.run.vm03.stdout:  python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi
2026-03-09T14:24:03.167 INFO:teuthology.orchestra.run.vm03.stdout:  libjs-mochikit python-pecan-doc python-psutil-doc subversion
2026-03-09T14:24:03.167 INFO:teuthology.orchestra.run.vm03.stdout:  python-pygments-doc ttf-bitstream-vera python-pyinotify-doc python3-dap
2026-03-09T14:24:03.167 INFO:teuthology.orchestra.run.vm03.stdout:  python-sklearn-doc ipython3 python-waitress-doc python-webob-doc
2026-03-09T14:24:03.167 INFO:teuthology.orchestra.run.vm03.stdout:  python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol
2026-03-09T14:24:03.167 INFO:teuthology.orchestra.run.vm03.stdout:  smart-notifier mailx | mailutils
2026-03-09T14:24:03.167 INFO:teuthology.orchestra.run.vm03.stdout:Recommended packages:
2026-03-09T14:24:03.167 INFO:teuthology.orchestra.run.vm03.stdout:  btrfs-tools
2026-03-09T14:24:03.203 INFO:teuthology.orchestra.run.vm03.stdout:The following NEW packages will be installed:
2026-03-09T14:24:03.203 INFO:teuthology.orchestra.run.vm03.stdout:  ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm
2026-03-09T14:24:03.203 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents
2026-03-09T14:24:03.203 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq
2026-03-09T14:24:03.203 INFO:teuthology.orchestra.run.vm03.stdout:  libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1
2026-03-09T14:24:03.203 INFO:teuthology.orchestra.run.vm03.stdout:  liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a
2026-03-09T14:24:03.203 INFO:teuthology.orchestra.run.vm03.stdout:  libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev
2026-03-09T14:24:03.203 INFO:teuthology.orchestra.run.vm03.stdout:  librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket
2026-03-09T14:24:03.203 INFO:teuthology.orchestra.run.vm03.stdout:  lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-09T14:24:03.203 INFO:teuthology.orchestra.run.vm03.stdout:  python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T14:24:03.203 INFO:teuthology.orchestra.run.vm03.stdout:  python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot
2026-03-09T14:24:03.203 INFO:teuthology.orchestra.run.vm03.stdout:  python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-09T14:24:03.203 INFO:teuthology.orchestra.run.vm03.stdout:  python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T14:24:03.203 INFO:teuthology.orchestra.run.vm03.stdout:  python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T14:24:03.203 INFO:teuthology.orchestra.run.vm03.stdout:  python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T14:24:03.203 INFO:teuthology.orchestra.run.vm03.stdout:  python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-09T14:24:03.203 INFO:teuthology.orchestra.run.vm03.stdout:  python3-prettytable python3-psutil python3-py python3-pygments
2026-03-09T14:24:03.203 INFO:teuthology.orchestra.run.vm03.stdout:  python3-pyinotify python3-pytest python3-rados python3-rbd
2026-03-09T14:24:03.203 INFO:teuthology.orchestra.run.vm03.stdout:  python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes
2026-03-09T14:24:03.203 INFO:teuthology.orchestra.run.vm03.stdout:  python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:24:03.203 INFO:teuthology.orchestra.run.vm03.stdout:  python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:24:03.203 INFO:teuthology.orchestra.run.vm03.stdout:  python3-threadpoolctl python3-toml python3-waitress python3-wcwidth
2026-03-09T14:24:03.204 INFO:teuthology.orchestra.run.vm03.stdout:  python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-09T14:24:03.204 INFO:teuthology.orchestra.run.vm03.stdout:  python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools
2026-03-09T14:24:03.204 INFO:teuthology.orchestra.run.vm03.stdout:  socat unzip xmlstarlet zip
2026-03-09T14:24:03.204 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be upgraded:
2026-03-09T14:24:03.204 INFO:teuthology.orchestra.run.vm03.stdout:  librados2 librbd1
2026-03-09T14:24:03.294 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:24:03.294 INFO:teuthology.orchestra.run.vm05.stdout:  kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T14:24:03.294 INFO:teuthology.orchestra.run.vm05.stdout:  libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-09T14:24:03.294 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:24:03.295 INFO:teuthology.orchestra.run.vm05.stdout:The following additional packages will be installed:
2026-03-09T14:24:03.295 INFO:teuthology.orchestra.run.vm05.stdout:  ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local
2026-03-09T14:24:03.295 INFO:teuthology.orchestra.run.vm05.stdout:  ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq
2026-03-09T14:24:03.295 INFO:teuthology.orchestra.run.vm05.stdout:  libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-09T14:24:03.295 INFO:teuthology.orchestra.run.vm05.stdout:  liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5
2026-03-09T14:24:03.295 INFO:teuthology.orchestra.run.vm05.stdout:  libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph
2026-03-09T14:24:03.295 INFO:teuthology.orchestra.run.vm05.stdout:  libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-09T14:24:03.295 INFO:teuthology.orchestra.run.vm05.stdout:  pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T14:24:03.299 INFO:teuthology.orchestra.run.vm05.stdout:  python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-09T14:24:03.299 INFO:teuthology.orchestra.run.vm05.stdout:  python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-09T14:24:03.299 INFO:teuthology.orchestra.run.vm05.stdout:  python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T14:24:03.299 INFO:teuthology.orchestra.run.vm05.stdout:  python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T14:24:03.299 INFO:teuthology.orchestra.run.vm05.stdout:  python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T14:24:03.299 INFO:teuthology.orchestra.run.vm05.stdout:  python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-09T14:24:03.299 INFO:teuthology.orchestra.run.vm05.stdout:  python3-prettytable python3-psutil python3-py python3-pygments
2026-03-09T14:24:03.299 INFO:teuthology.orchestra.run.vm05.stdout:  python3-pyinotify python3-pytest python3-repoze.lru
2026-03-09T14:24:03.299 INFO:teuthology.orchestra.run.vm05.stdout:  python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T14:24:03.299 INFO:teuthology.orchestra.run.vm05.stdout:  python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T14:24:03.299 INFO:teuthology.orchestra.run.vm05.stdout:  python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T14:24:03.299 INFO:teuthology.orchestra.run.vm05.stdout:  python3-toml python3-waitress python3-wcwidth python3-webob
2026-03-09T14:24:03.299 INFO:teuthology.orchestra.run.vm05.stdout:  python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:24:03.299 INFO:teuthology.orchestra.run.vm05.stdout:  qttranslations5-l10n smartmontools socat unzip xmlstarlet zip
2026-03-09T14:24:03.299 INFO:teuthology.orchestra.run.vm05.stdout:Suggested packages:
2026-03-09T14:24:03.299 INFO:teuthology.orchestra.run.vm05.stdout:  python3-influxdb readline-doc python3-beaker python-mako-doc
2026-03-09T14:24:03.299 INFO:teuthology.orchestra.run.vm05.stdout:  python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi
2026-03-09T14:24:03.299 INFO:teuthology.orchestra.run.vm05.stdout:  libjs-mochikit python-pecan-doc python-psutil-doc subversion
2026-03-09T14:24:03.299 INFO:teuthology.orchestra.run.vm05.stdout:  python-pygments-doc ttf-bitstream-vera python-pyinotify-doc python3-dap
2026-03-09T14:24:03.299 INFO:teuthology.orchestra.run.vm05.stdout:  python-sklearn-doc ipython3 python-waitress-doc python-webob-doc
2026-03-09T14:24:03.299 INFO:teuthology.orchestra.run.vm05.stdout:  python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol
2026-03-09T14:24:03.299 INFO:teuthology.orchestra.run.vm05.stdout:  smart-notifier mailx | mailutils
2026-03-09T14:24:03.299 INFO:teuthology.orchestra.run.vm05.stdout:Recommended packages:
2026-03-09T14:24:03.299 INFO:teuthology.orchestra.run.vm05.stdout:  btrfs-tools
2026-03-09T14:24:03.317 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-09T14:24:03.318 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-09T14:24:03.343 INFO:teuthology.orchestra.run.vm05.stdout:The following NEW packages will be installed:
2026-03-09T14:24:03.344 INFO:teuthology.orchestra.run.vm05.stdout:  ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm
2026-03-09T14:24:03.344 INFO:teuthology.orchestra.run.vm05.stdout:  ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents
2026-03-09T14:24:03.344 INFO:teuthology.orchestra.run.vm05.stdout:  ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq
2026-03-09T14:24:03.344 INFO:teuthology.orchestra.run.vm05.stdout:  libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1
2026-03-09T14:24:03.344 INFO:teuthology.orchestra.run.vm05.stdout:  liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a
2026-03-09T14:24:03.344 INFO:teuthology.orchestra.run.vm05.stdout:  libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev
2026-03-09T14:24:03.344 INFO:teuthology.orchestra.run.vm05.stdout:  librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket
2026-03-09T14:24:03.344 INFO:teuthology.orchestra.run.vm05.stdout:  lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-09T14:24:03.344 INFO:teuthology.orchestra.run.vm05.stdout:  python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T14:24:03.344 INFO:teuthology.orchestra.run.vm05.stdout:  python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot
2026-03-09T14:24:03.344 INFO:teuthology.orchestra.run.vm05.stdout:  python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-09T14:24:03.344 INFO:teuthology.orchestra.run.vm05.stdout:  python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T14:24:03.344 INFO:teuthology.orchestra.run.vm05.stdout:  python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T14:24:03.344 INFO:teuthology.orchestra.run.vm05.stdout:  python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T14:24:03.344 INFO:teuthology.orchestra.run.vm05.stdout:  python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-09T14:24:03.344 INFO:teuthology.orchestra.run.vm05.stdout:  python3-prettytable python3-psutil python3-py python3-pygments
2026-03-09T14:24:03.344 INFO:teuthology.orchestra.run.vm05.stdout:  python3-pyinotify python3-pytest python3-rados python3-rbd
2026-03-09T14:24:03.344 INFO:teuthology.orchestra.run.vm05.stdout:  python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes
2026-03-09T14:24:03.344 INFO:teuthology.orchestra.run.vm05.stdout:  python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:24:03.344 INFO:teuthology.orchestra.run.vm05.stdout:  python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:24:03.344 INFO:teuthology.orchestra.run.vm05.stdout:  python3-threadpoolctl python3-toml python3-waitress python3-wcwidth
2026-03-09T14:24:03.344 INFO:teuthology.orchestra.run.vm05.stdout:  python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-09T14:24:03.344 INFO:teuthology.orchestra.run.vm05.stdout:  python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools
2026-03-09T14:24:03.344 INFO:teuthology.orchestra.run.vm05.stdout:  socat unzip xmlstarlet zip
2026-03-09T14:24:03.344 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be upgraded:
2026-03-09T14:24:03.344 INFO:teuthology.orchestra.run.vm05.stdout:  librados2 librbd1
2026-03-09T14:24:03.458 INFO:teuthology.orchestra.run.vm03.stdout:2 upgraded, 107 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T14:24:03.458 INFO:teuthology.orchestra.run.vm03.stdout:Need to get 178 MB of archives.
2026-03-09T14:24:03.458 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 782 MB of additional disk space will be used.
2026-03-09T14:24:03.458 INFO:teuthology.orchestra.run.vm03.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB]
2026-03-09T14:24:03.533 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:24:03.533 INFO:teuthology.orchestra.run.vm04.stdout:  kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T14:24:03.534 INFO:teuthology.orchestra.run.vm04.stdout:  libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-09T14:24:03.534 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:24:03.534 INFO:teuthology.orchestra.run.vm04.stdout:The following additional packages will be installed:
2026-03-09T14:24:03.534 INFO:teuthology.orchestra.run.vm04.stdout:  ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local
2026-03-09T14:24:03.534 INFO:teuthology.orchestra.run.vm04.stdout:  ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq
2026-03-09T14:24:03.535 INFO:teuthology.orchestra.run.vm04.stdout:  libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-09T14:24:03.535 INFO:teuthology.orchestra.run.vm04.stdout:  liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5
2026-03-09T14:24:03.535 INFO:teuthology.orchestra.run.vm04.stdout:  libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph
2026-03-09T14:24:03.535 INFO:teuthology.orchestra.run.vm04.stdout:  libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-09T14:24:03.535 INFO:teuthology.orchestra.run.vm04.stdout:  pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T14:24:03.535 INFO:teuthology.orchestra.run.vm04.stdout:  python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-09T14:24:03.535 INFO:teuthology.orchestra.run.vm04.stdout:  python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-09T14:24:03.536 INFO:teuthology.orchestra.run.vm04.stdout:  python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T14:24:03.536 INFO:teuthology.orchestra.run.vm04.stdout:  python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T14:24:03.536 INFO:teuthology.orchestra.run.vm04.stdout:  python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T14:24:03.536 INFO:teuthology.orchestra.run.vm04.stdout:  python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-09T14:24:03.536 INFO:teuthology.orchestra.run.vm04.stdout:  python3-prettytable python3-psutil python3-py python3-pygments
2026-03-09T14:24:03.536 INFO:teuthology.orchestra.run.vm04.stdout:  python3-pyinotify python3-pytest python3-repoze.lru
2026-03-09T14:24:03.536 INFO:teuthology.orchestra.run.vm04.stdout:  python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T14:24:03.536 INFO:teuthology.orchestra.run.vm04.stdout:  python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T14:24:03.536 INFO:teuthology.orchestra.run.vm04.stdout:  python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T14:24:03.536 INFO:teuthology.orchestra.run.vm04.stdout:  python3-toml python3-waitress python3-wcwidth python3-webob
2026-03-09T14:24:03.536 INFO:teuthology.orchestra.run.vm04.stdout:  python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:24:03.536 INFO:teuthology.orchestra.run.vm04.stdout:  qttranslations5-l10n smartmontools socat unzip xmlstarlet zip
2026-03-09T14:24:03.537 INFO:teuthology.orchestra.run.vm04.stdout:Suggested packages:
2026-03-09T14:24:03.537 INFO:teuthology.orchestra.run.vm04.stdout:  python3-influxdb readline-doc python3-beaker python-mako-doc
2026-03-09T14:24:03.537 INFO:teuthology.orchestra.run.vm04.stdout:  python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi
2026-03-09T14:24:03.537 INFO:teuthology.orchestra.run.vm04.stdout:  libjs-mochikit python-pecan-doc python-psutil-doc subversion
2026-03-09T14:24:03.537 INFO:teuthology.orchestra.run.vm04.stdout:  python-pygments-doc ttf-bitstream-vera python-pyinotify-doc python3-dap
2026-03-09T14:24:03.537 INFO:teuthology.orchestra.run.vm04.stdout:  python-sklearn-doc ipython3 python-waitress-doc python-webob-doc
2026-03-09T14:24:03.537 INFO:teuthology.orchestra.run.vm04.stdout:  python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol
2026-03-09T14:24:03.537 INFO:teuthology.orchestra.run.vm04.stdout:  smart-notifier mailx | mailutils
2026-03-09T14:24:03.537 INFO:teuthology.orchestra.run.vm04.stdout:Recommended packages:
2026-03-09T14:24:03.537 INFO:teuthology.orchestra.run.vm04.stdout:  btrfs-tools
2026-03-09T14:24:03.550 INFO:teuthology.orchestra.run.vm05.stdout:2 upgraded, 107 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T14:24:03.550 INFO:teuthology.orchestra.run.vm05.stdout:Need to get 178 MB of archives.
2026-03-09T14:24:03.550 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 782 MB of additional disk space will be used.
2026-03-09T14:24:03.550 INFO:teuthology.orchestra.run.vm05.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB]
2026-03-09T14:24:03.582 INFO:teuthology.orchestra.run.vm04.stdout:The following NEW packages will be installed:
2026-03-09T14:24:03.582 INFO:teuthology.orchestra.run.vm04.stdout:  ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm
2026-03-09T14:24:03.582 INFO:teuthology.orchestra.run.vm04.stdout:  ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents
2026-03-09T14:24:03.582 INFO:teuthology.orchestra.run.vm04.stdout:  ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq
2026-03-09T14:24:03.582 INFO:teuthology.orchestra.run.vm04.stdout:  libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1
2026-03-09T14:24:03.582 INFO:teuthology.orchestra.run.vm04.stdout:  liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a
2026-03-09T14:24:03.582 INFO:teuthology.orchestra.run.vm04.stdout:  libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev
2026-03-09T14:24:03.582 INFO:teuthology.orchestra.run.vm04.stdout:  librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket
2026-03-09T14:24:03.583 INFO:teuthology.orchestra.run.vm04.stdout:  lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-09T14:24:03.583 INFO:teuthology.orchestra.run.vm04.stdout:  python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T14:24:03.583 INFO:teuthology.orchestra.run.vm04.stdout:  python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot
2026-03-09T14:24:03.583 INFO:teuthology.orchestra.run.vm04.stdout:  python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-09T14:24:03.583 INFO:teuthology.orchestra.run.vm04.stdout:  python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T14:24:03.583 INFO:teuthology.orchestra.run.vm04.stdout:  python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T14:24:03.583 INFO:teuthology.orchestra.run.vm04.stdout:  python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T14:24:03.583 INFO:teuthology.orchestra.run.vm04.stdout:  python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-09T14:24:03.583 INFO:teuthology.orchestra.run.vm04.stdout:  python3-prettytable python3-psutil python3-py python3-pygments
2026-03-09T14:24:03.583 INFO:teuthology.orchestra.run.vm04.stdout:  python3-pyinotify python3-pytest python3-rados python3-rbd
2026-03-09T14:24:03.583 INFO:teuthology.orchestra.run.vm04.stdout:  python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes
2026-03-09T14:24:03.584 INFO:teuthology.orchestra.run.vm04.stdout:  python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:24:03.584 INFO:teuthology.orchestra.run.vm04.stdout:  python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:24:03.584 INFO:teuthology.orchestra.run.vm04.stdout:  python3-threadpoolctl python3-toml python3-waitress python3-wcwidth
2026-03-09T14:24:03.584 INFO:teuthology.orchestra.run.vm04.stdout:  python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-09T14:24:03.584 INFO:teuthology.orchestra.run.vm04.stdout:  python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools
2026-03-09T14:24:03.584 INFO:teuthology.orchestra.run.vm04.stdout:  socat unzip xmlstarlet zip
2026-03-09T14:24:03.584 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be upgraded:
2026-03-09T14:24:03.585 INFO:teuthology.orchestra.run.vm04.stdout:  librados2 librbd1
2026-03-09T14:24:03.703 INFO:teuthology.orchestra.run.vm05.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB]
2026-03-09T14:24:03.708 INFO:teuthology.orchestra.run.vm05.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB]
2026-03-09T14:24:03.739 INFO:teuthology.orchestra.run.vm05.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB]
2026-03-09T14:24:03.760 INFO:teuthology.orchestra.run.vm03.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB]
2026-03-09T14:24:03.798 INFO:teuthology.orchestra.run.vm04.stdout:2 upgraded, 107 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T14:24:03.798 INFO:teuthology.orchestra.run.vm04.stdout:Need to get 178 MB of archives.
2026-03-09T14:24:03.798 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 782 MB of additional disk space will be used.
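
Once the downloads below finish and the packages configure, the pinned build can be verified on each node; a quick sketch:

    # Confirm the CI build version actually landed (package names from the pin list above).
    dpkg-query -W -f='${Package} ${Version}\n' ceph ceph-common librbd1
    ceph --version

All three hosts resolve the identical '2 upgraded, 107 newly installed' plan against the same chacra repository, so the reported versions should agree across the cluster.
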
2026-03-09T14:24:03.798 INFO:teuthology.orchestra.run.vm04.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB]
2026-03-09T14:24:03.804 INFO:teuthology.orchestra.run.vm03.stdout:Get:3 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB]
2026-03-09T14:24:03.812 INFO:teuthology.orchestra.run.vm03.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB]
2026-03-09T14:24:03.831 INFO:teuthology.orchestra.run.vm05.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB]
2026-03-09T14:24:03.835 INFO:teuthology.orchestra.run.vm05.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB]
2026-03-09T14:24:03.847 INFO:teuthology.orchestra.run.vm05.stdout:Get:7 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB]
2026-03-09T14:24:03.851 INFO:teuthology.orchestra.run.vm05.stdout:Get:8 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB]
2026-03-09T14:24:03.852 INFO:teuthology.orchestra.run.vm05.stdout:Get:9 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB]
2026-03-09T14:24:03.852 INFO:teuthology.orchestra.run.vm05.stdout:Get:10 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB]
2026-03-09T14:24:03.852 INFO:teuthology.orchestra.run.vm05.stdout:Get:11 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB]
2026-03-09T14:24:03.859 INFO:teuthology.orchestra.run.vm03.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB]
2026-03-09T14:24:03.860 INFO:teuthology.orchestra.run.vm05.stdout:Get:12 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB]
2026-03-09T14:24:03.862 INFO:teuthology.orchestra.run.vm05.stdout:Get:13 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB]
2026-03-09T14:24:03.864 INFO:teuthology.orchestra.run.vm05.stdout:Get:14 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB]
2026-03-09T14:24:03.896 INFO:teuthology.orchestra.run.vm05.stdout:Get:15 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 [5034 B]
2026-03-09T14:24:03.896 INFO:teuthology.orchestra.run.vm05.stdout:Get:16 https://archive.ubuntu.com/ubuntu jammy/main amd64 zip amd64 3.0-12build2 [176 kB]
2026-03-09T14:24:03.897 INFO:teuthology.orchestra.run.vm05.stdout:Get:17 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB]
2026-03-09T14:24:03.899 INFO:teuthology.orchestra.run.vm05.stdout:Get:18 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB]
2026-03-09T14:24:03.900 INFO:teuthology.orchestra.run.vm05.stdout:Get:19 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB]
2026-03-09T14:24:03.900 INFO:teuthology.orchestra.run.vm05.stdout:Get:20 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B]
2026-03-09T14:24:03.901 INFO:teuthology.orchestra.run.vm05.stdout:Get:21 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB]
2026-03-09T14:24:03.901 INFO:teuthology.orchestra.run.vm05.stdout:Get:22 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B]
2026-03-09T14:24:03.929 INFO:teuthology.orchestra.run.vm05.stdout:Get:23 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B]
2026-03-09T14:24:03.929 INFO:teuthology.orchestra.run.vm05.stdout:Get:24 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB]
2026-03-09T14:24:03.929 INFO:teuthology.orchestra.run.vm05.stdout:Get:25 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB]
2026-03-09T14:24:03.947 INFO:teuthology.orchestra.run.vm05.stdout:Get:26 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB]
2026-03-09T14:24:03.962 INFO:teuthology.orchestra.run.vm05.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B]
2026-03-09T14:24:03.962 INFO:teuthology.orchestra.run.vm05.stdout:Get:28 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B]
2026-03-09T14:24:03.962 INFO:teuthology.orchestra.run.vm05.stdout:Get:29 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB]
2026-03-09T14:24:03.965 INFO:teuthology.orchestra.run.vm05.stdout:Get:30 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB]
2026-03-09T14:24:03.965 INFO:teuthology.orchestra.run.vm05.stdout:Get:31 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB]
2026-03-09T14:24:03.965 INFO:teuthology.orchestra.run.vm05.stdout:Get:32 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 [60.5 kB]
2026-03-09T14:24:03.965 INFO:teuthology.orchestra.run.vm05.stdout:Get:33 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB]
2026-03-09T14:24:03.967 INFO:teuthology.orchestra.run.vm04.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB]
2026-03-09T14:24:03.972 INFO:teuthology.orchestra.run.vm04.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB]
2026-03-09T14:24:03.994 INFO:teuthology.orchestra.run.vm05.stdout:Get:34 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B]
2026-03-09T14:24:03.995 INFO:teuthology.orchestra.run.vm05.stdout:Get:35 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB]
2026-03-09T14:24:03.995 INFO:teuthology.orchestra.run.vm05.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB]
2026-03-09T14:24:04.007 INFO:teuthology.orchestra.run.vm04.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB]
2026-03-09T14:24:04.027 INFO:teuthology.orchestra.run.vm05.stdout:Get:37 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempita all 0.5.2-6ubuntu1 [15.1 kB]
2026-03-09T14:24:04.027 INFO:teuthology.orchestra.run.vm05.stdout:Get:38 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste all 3.5.0+dfsg1-1 [456 kB]
2026-03-09T14:24:04.035 INFO:teuthology.orchestra.run.vm05.stdout:Get:39 https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B]
2026-03-09T14:24:04.035 INFO:teuthology.orchestra.run.vm05.stdout:Get:40 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB]
2026-03-09T14:24:04.036 INFO:teuthology.orchestra.run.vm05.stdout:Get:41 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB]
2026-03-09T14:24:04.036 INFO:teuthology.orchestra.run.vm05.stdout:Get:42 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB]
2026-03-09T14:24:04.036 INFO:teuthology.orchestra.run.vm05.stdout:Get:43 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB]
2026-03-09T14:24:04.060 INFO:teuthology.orchestra.run.vm05.stdout:Get:44 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB]
2026-03-09T14:24:04.061 INFO:teuthology.orchestra.run.vm05.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB]
2026-03-09T14:24:04.063 INFO:teuthology.orchestra.run.vm05.stdout:Get:46 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB]
2026-03-09T14:24:04.092 INFO:teuthology.orchestra.run.vm05.stdout:Get:47 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB]
2026-03-09T14:24:04.093 INFO:teuthology.orchestra.run.vm05.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB]
2026-03-09T14:24:04.154 INFO:teuthology.orchestra.run.vm04.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB]
2026-03-09T14:24:04.155 INFO:teuthology.orchestra.run.vm04.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB]
2026-03-09T14:24:04.160 INFO:teuthology.orchestra.run.vm04.stdout:Get:7 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB]
2026-03-09T14:24:04.162 INFO:teuthology.orchestra.run.vm04.stdout:Get:8 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB]
2026-03-09T14:24:04.162 INFO:teuthology.orchestra.run.vm04.stdout:Get:9 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB]
2026-03-09T14:24:04.163 INFO:teuthology.orchestra.run.vm04.stdout:Get:10 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB]
2026-03-09T14:24:04.163 INFO:teuthology.orchestra.run.vm04.stdout:Get:11 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB]
2026-03-09T14:24:04.164 INFO:teuthology.orchestra.run.vm05.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB]
2026-03-09T14:24:04.165 INFO:teuthology.orchestra.run.vm05.stdout:Get:50 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB]
2026-03-09T14:24:04.166 INFO:teuthology.orchestra.run.vm05.stdout:Get:51 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB]
2026-03-09T14:24:04.167
INFO:teuthology.orchestra.run.vm04.stdout:Get:12 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB] 2026-03-09T14:24:04.168 INFO:teuthology.orchestra.run.vm04.stdout:Get:13 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB] 2026-03-09T14:24:04.172 INFO:teuthology.orchestra.run.vm03.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB] 2026-03-09T14:24:04.182 INFO:teuthology.orchestra.run.vm03.stdout:Get:7 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB] 2026-03-09T14:24:04.191 INFO:teuthology.orchestra.run.vm04.stdout:Get:14 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB] 2026-03-09T14:24:04.196 INFO:teuthology.orchestra.run.vm04.stdout:Get:15 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB] 2026-03-09T14:24:04.198 INFO:teuthology.orchestra.run.vm05.stdout:Get:52 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B] 2026-03-09T14:24:04.198 INFO:teuthology.orchestra.run.vm05.stdout:Get:53 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB] 2026-03-09T14:24:04.198 INFO:teuthology.orchestra.run.vm05.stdout:Get:54 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB] 2026-03-09T14:24:04.198 INFO:teuthology.orchestra.run.vm05.stdout:Get:55 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB] 2026-03-09T14:24:04.198 INFO:teuthology.orchestra.run.vm05.stdout:Get:56 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB] 2026-03-09T14:24:04.199 INFO:teuthology.orchestra.run.vm05.stdout:Get:57 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB] 2026-03-09T14:24:04.202 INFO:teuthology.orchestra.run.vm05.stdout:Get:58 https://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB] 2026-03-09T14:24:04.205 INFO:teuthology.orchestra.run.vm04.stdout:Get:16 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 [5034 B] 2026-03-09T14:24:04.206 INFO:teuthology.orchestra.run.vm04.stdout:Get:17 https://archive.ubuntu.com/ubuntu jammy/main amd64 zip amd64 3.0-12build2 [176 kB] 2026-03-09T14:24:04.207 INFO:teuthology.orchestra.run.vm04.stdout:Get:18 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB] 2026-03-09T14:24:04.208 INFO:teuthology.orchestra.run.vm04.stdout:Get:19 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB] 2026-03-09T14:24:04.209 INFO:teuthology.orchestra.run.vm04.stdout:Get:20 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB] 2026-03-09T14:24:04.210 INFO:teuthology.orchestra.run.vm04.stdout:Get:21 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B] 2026-03-09T14:24:04.211 INFO:teuthology.orchestra.run.vm04.stdout:Get:22 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB] 2026-03-09T14:24:04.216 INFO:teuthology.orchestra.run.vm03.stdout:Get:8 
https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB] 2026-03-09T14:24:04.227 INFO:teuthology.orchestra.run.vm04.stdout:Get:23 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B] 2026-03-09T14:24:04.227 INFO:teuthology.orchestra.run.vm04.stdout:Get:24 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B] 2026-03-09T14:24:04.230 INFO:teuthology.orchestra.run.vm05.stdout:Get:59 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB] 2026-03-09T14:24:04.231 INFO:teuthology.orchestra.run.vm05.stdout:Get:60 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB] 2026-03-09T14:24:04.236 INFO:teuthology.orchestra.run.vm05.stdout:Get:61 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB] 2026-03-09T14:24:04.240 INFO:teuthology.orchestra.run.vm05.stdout:Get:62 https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB] 2026-03-09T14:24:04.243 INFO:teuthology.orchestra.run.vm05.stdout:Get:63 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket amd64 3.0~rc1+git+ac3201d-6 [78.9 kB] 2026-03-09T14:24:04.243 INFO:teuthology.orchestra.run.vm05.stdout:Get:64 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB] 2026-03-09T14:24:04.244 INFO:teuthology.orchestra.run.vm05.stdout:Get:65 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB] 2026-03-09T14:24:04.248 INFO:teuthology.orchestra.run.vm05.stdout:Get:66 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB] 2026-03-09T14:24:04.248 INFO:teuthology.orchestra.run.vm05.stdout:Get:67 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB] 2026-03-09T14:24:04.251 INFO:teuthology.orchestra.run.vm04.stdout:Get:25 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB] 2026-03-09T14:24:04.251 INFO:teuthology.orchestra.run.vm04.stdout:Get:26 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB] 2026-03-09T14:24:04.252 INFO:teuthology.orchestra.run.vm04.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B] 2026-03-09T14:24:04.252 INFO:teuthology.orchestra.run.vm04.stdout:Get:28 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B] 2026-03-09T14:24:04.252 INFO:teuthology.orchestra.run.vm04.stdout:Get:29 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB] 2026-03-09T14:24:04.253 INFO:teuthology.orchestra.run.vm04.stdout:Get:30 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB] 2026-03-09T14:24:04.254 INFO:teuthology.orchestra.run.vm04.stdout:Get:31 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB] 2026-03-09T14:24:04.262 INFO:teuthology.orchestra.run.vm04.stdout:Get:32 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 [60.5 kB] 2026-03-09T14:24:04.262 INFO:teuthology.orchestra.run.vm05.stdout:Get:68 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B] 2026-03-09T14:24:04.282 INFO:teuthology.orchestra.run.vm05.stdout:Get:69 
https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB] 2026-03-09T14:24:04.282 INFO:teuthology.orchestra.run.vm05.stdout:Get:70 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB] 2026-03-09T14:24:04.282 INFO:teuthology.orchestra.run.vm05.stdout:Get:71 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB] 2026-03-09T14:24:04.284 INFO:teuthology.orchestra.run.vm05.stdout:Get:72 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB] 2026-03-09T14:24:04.284 INFO:teuthology.orchestra.run.vm05.stdout:Get:73 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB] 2026-03-09T14:24:04.292 INFO:teuthology.orchestra.run.vm05.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 0.9.6-1.3 [24.8 kB] 2026-03-09T14:24:04.292 INFO:teuthology.orchestra.run.vm05.stdout:Get:75 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB] 2026-03-09T14:24:04.292 INFO:teuthology.orchestra.run.vm04.stdout:Get:33 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB] 2026-03-09T14:24:04.292 INFO:teuthology.orchestra.run.vm04.stdout:Get:34 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B] 2026-03-09T14:24:04.293 INFO:teuthology.orchestra.run.vm04.stdout:Get:35 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB] 2026-03-09T14:24:04.293 INFO:teuthology.orchestra.run.vm04.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB] 2026-03-09T14:24:04.294 INFO:teuthology.orchestra.run.vm04.stdout:Get:37 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempita all 0.5.2-6ubuntu1 [15.1 kB] 2026-03-09T14:24:04.294 INFO:teuthology.orchestra.run.vm04.stdout:Get:38 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste all 3.5.0+dfsg1-1 [456 kB] 2026-03-09T14:24:04.299 INFO:teuthology.orchestra.run.vm05.stdout:Get:76 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB] 2026-03-09T14:24:04.300 INFO:teuthology.orchestra.run.vm04.stdout:Get:39 https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B] 2026-03-09T14:24:04.301 INFO:teuthology.orchestra.run.vm05.stdout:Get:77 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB] 2026-03-09T14:24:04.301 INFO:teuthology.orchestra.run.vm04.stdout:Get:40 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB] 2026-03-09T14:24:04.302 INFO:teuthology.orchestra.run.vm04.stdout:Get:41 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB] 2026-03-09T14:24:04.303 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:04 vm03 bash[17524]: cluster 2026-03-09T14:24:02.959708+0000 mgr.x (mgr.14150) 319 : cluster [DBG] pgmap v267: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:04.318 INFO:teuthology.orchestra.run.vm03.stdout:Get:9 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB] 2026-03-09T14:24:04.319 INFO:teuthology.orchestra.run.vm03.stdout:Get:10 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB] 2026-03-09T14:24:04.319 INFO:teuthology.orchestra.run.vm03.stdout:Get:11 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB] 2026-03-09T14:24:04.319 INFO:teuthology.orchestra.run.vm03.stdout:Get:12 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB] 2026-03-09T14:24:04.323 INFO:teuthology.orchestra.run.vm03.stdout:Get:13 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB] 2026-03-09T14:24:04.323 INFO:teuthology.orchestra.run.vm03.stdout:Get:14 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB] 2026-03-09T14:24:04.324 INFO:teuthology.orchestra.run.vm03.stdout:Get:15 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB] 2026-03-09T14:24:04.325 INFO:teuthology.orchestra.run.vm03.stdout:Get:16 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 [5034 B] 2026-03-09T14:24:04.325 INFO:teuthology.orchestra.run.vm03.stdout:Get:17 https://archive.ubuntu.com/ubuntu jammy/main amd64 zip amd64 3.0-12build2 [176 kB] 2026-03-09T14:24:04.332 INFO:teuthology.orchestra.run.vm05.stdout:Get:78 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB] 2026-03-09T14:24:04.340 INFO:teuthology.orchestra.run.vm04.stdout:Get:42 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB] 2026-03-09T14:24:04.342 INFO:teuthology.orchestra.run.vm04.stdout:Get:43 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB] 2026-03-09T14:24:04.368 INFO:teuthology.orchestra.run.vm03.stdout:Get:18 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB] 2026-03-09T14:24:04.391 INFO:teuthology.orchestra.run.vm04.stdout:Get:44 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB] 2026-03-09T14:24:04.391 INFO:teuthology.orchestra.run.vm04.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB] 2026-03-09T14:24:04.393 INFO:teuthology.orchestra.run.vm04.stdout:Get:46 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB] 2026-03-09T14:24:04.393 INFO:teuthology.orchestra.run.vm04.stdout:Get:47 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB] 2026-03-09T14:24:04.394 INFO:teuthology.orchestra.run.vm04.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB] 2026-03-09T14:24:04.400 INFO:teuthology.orchestra.run.vm05.stdout:Get:79 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB] 2026-03-09T14:24:04.416 INFO:teuthology.orchestra.run.vm04.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB] 2026-03-09T14:24:04.417 INFO:teuthology.orchestra.run.vm04.stdout:Get:50 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 
kB] 2026-03-09T14:24:04.417 INFO:teuthology.orchestra.run.vm04.stdout:Get:51 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB] 2026-03-09T14:24:04.442 INFO:teuthology.orchestra.run.vm03.stdout:Get:19 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB] 2026-03-09T14:24:04.442 INFO:teuthology.orchestra.run.vm03.stdout:Get:20 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB] 2026-03-09T14:24:04.443 INFO:teuthology.orchestra.run.vm03.stdout:Get:21 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B] 2026-03-09T14:24:04.443 INFO:teuthology.orchestra.run.vm03.stdout:Get:22 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB] 2026-03-09T14:24:04.454 INFO:teuthology.orchestra.run.vm04.stdout:Get:52 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B] 2026-03-09T14:24:04.454 INFO:teuthology.orchestra.run.vm04.stdout:Get:53 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB] 2026-03-09T14:24:04.465 INFO:teuthology.orchestra.run.vm04.stdout:Get:54 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB] 2026-03-09T14:24:04.465 INFO:teuthology.orchestra.run.vm04.stdout:Get:55 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB] 2026-03-09T14:24:04.466 INFO:teuthology.orchestra.run.vm04.stdout:Get:56 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB] 2026-03-09T14:24:04.466 INFO:teuthology.orchestra.run.vm04.stdout:Get:57 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB] 2026-03-09T14:24:04.475 INFO:teuthology.orchestra.run.vm04.stdout:Get:58 https://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB] 2026-03-09T14:24:04.477 INFO:teuthology.orchestra.run.vm04.stdout:Get:59 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB] 2026-03-09T14:24:04.479 INFO:teuthology.orchestra.run.vm04.stdout:Get:60 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB] 2026-03-09T14:24:04.503 INFO:teuthology.orchestra.run.vm04.stdout:Get:61 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB] 2026-03-09T14:24:04.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:04 vm05 bash[20070]: cluster 2026-03-09T14:24:02.959708+0000 mgr.x (mgr.14150) 319 : cluster [DBG] pgmap v267: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:24:04.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:04 vm04 bash[19581]: cluster 2026-03-09T14:24:02.959708+0000 mgr.x (mgr.14150) 319 : cluster [DBG] pgmap v267: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:04.509 INFO:teuthology.orchestra.run.vm04.stdout:Get:62 https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB] 2026-03-09T14:24:04.511 INFO:teuthology.orchestra.run.vm03.stdout:Get:23 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B] 2026-03-09T14:24:04.511 INFO:teuthology.orchestra.run.vm03.stdout:Get:24 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B] 2026-03-09T14:24:04.511 INFO:teuthology.orchestra.run.vm03.stdout:Get:25 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB] 2026-03-09T14:24:04.512 INFO:teuthology.orchestra.run.vm03.stdout:Get:26 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB] 2026-03-09T14:24:04.512 INFO:teuthology.orchestra.run.vm03.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B] 2026-03-09T14:24:04.512 INFO:teuthology.orchestra.run.vm03.stdout:Get:28 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B] 2026-03-09T14:24:04.561 INFO:teuthology.orchestra.run.vm04.stdout:Get:63 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket amd64 3.0~rc1+git+ac3201d-6 [78.9 kB] 2026-03-09T14:24:04.562 INFO:teuthology.orchestra.run.vm04.stdout:Get:64 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB] 2026-03-09T14:24:04.562 INFO:teuthology.orchestra.run.vm04.stdout:Get:65 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB] 2026-03-09T14:24:04.564 INFO:teuthology.orchestra.run.vm04.stdout:Get:66 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB] 2026-03-09T14:24:04.565 INFO:teuthology.orchestra.run.vm04.stdout:Get:67 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB] 2026-03-09T14:24:04.566 INFO:teuthology.orchestra.run.vm04.stdout:Get:68 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B] 2026-03-09T14:24:04.566 INFO:teuthology.orchestra.run.vm04.stdout:Get:69 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB] 2026-03-09T14:24:04.567 INFO:teuthology.orchestra.run.vm04.stdout:Get:70 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB] 2026-03-09T14:24:04.576 INFO:teuthology.orchestra.run.vm04.stdout:Get:71 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB] 2026-03-09T14:24:04.603 INFO:teuthology.orchestra.run.vm04.stdout:Get:72 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB] 2026-03-09T14:24:04.605 INFO:teuthology.orchestra.run.vm04.stdout:Get:73 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB] 2026-03-09T14:24:04.612 INFO:teuthology.orchestra.run.vm04.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 0.9.6-1.3 [24.8 kB] 2026-03-09T14:24:04.612 INFO:teuthology.orchestra.run.vm04.stdout:Get:75 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB] 2026-03-09T14:24:04.612
INFO:teuthology.orchestra.run.vm04.stdout:Get:76 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB] 2026-03-09T14:24:04.613 INFO:teuthology.orchestra.run.vm04.stdout:Get:77 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB] 2026-03-09T14:24:04.614 INFO:teuthology.orchestra.run.vm04.stdout:Get:78 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB] 2026-03-09T14:24:04.632 INFO:teuthology.orchestra.run.vm04.stdout:Get:79 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB] 2026-03-09T14:24:04.671 INFO:teuthology.orchestra.run.vm03.stdout:Get:29 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB] 2026-03-09T14:24:04.672 INFO:teuthology.orchestra.run.vm03.stdout:Get:30 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB] 2026-03-09T14:24:04.672 INFO:teuthology.orchestra.run.vm03.stdout:Get:31 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB] 2026-03-09T14:24:04.672 INFO:teuthology.orchestra.run.vm03.stdout:Get:32 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 [60.5 kB] 2026-03-09T14:24:04.672 INFO:teuthology.orchestra.run.vm03.stdout:Get:33 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB] 2026-03-09T14:24:04.673 INFO:teuthology.orchestra.run.vm03.stdout:Get:34 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B] 2026-03-09T14:24:04.673 INFO:teuthology.orchestra.run.vm03.stdout:Get:35 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB] 2026-03-09T14:24:04.673 INFO:teuthology.orchestra.run.vm03.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB] 2026-03-09T14:24:04.674 INFO:teuthology.orchestra.run.vm03.stdout:Get:37 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempita all 0.5.2-6ubuntu1 [15.1 kB] 2026-03-09T14:24:04.674 INFO:teuthology.orchestra.run.vm03.stdout:Get:38 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste all 3.5.0+dfsg1-1 [456 kB] 2026-03-09T14:24:04.789 INFO:teuthology.orchestra.run.vm03.stdout:Get:39 https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B] 2026-03-09T14:24:04.789 INFO:teuthology.orchestra.run.vm03.stdout:Get:40 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB] 2026-03-09T14:24:04.789 INFO:teuthology.orchestra.run.vm03.stdout:Get:41 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB] 2026-03-09T14:24:04.837 INFO:teuthology.orchestra.run.vm03.stdout:Get:42 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB] 2026-03-09T14:24:04.838 INFO:teuthology.orchestra.run.vm03.stdout:Get:43 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB] 2026-03-09T14:24:04.908 INFO:teuthology.orchestra.run.vm03.stdout:Get:44 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB] 2026-03-09T14:24:04.908 INFO:teuthology.orchestra.run.vm03.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh 
all 2.5.0-1ubuntu0.1 [189 kB] 2026-03-09T14:24:04.930 INFO:teuthology.orchestra.run.vm03.stdout:Get:46 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB] 2026-03-09T14:24:04.930 INFO:teuthology.orchestra.run.vm03.stdout:Get:47 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB] 2026-03-09T14:24:04.930 INFO:teuthology.orchestra.run.vm03.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB] 2026-03-09T14:24:04.976 INFO:teuthology.orchestra.run.vm03.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB] 2026-03-09T14:24:05.021 INFO:teuthology.orchestra.run.vm03.stdout:Get:50 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB] 2026-03-09T14:24:05.021 INFO:teuthology.orchestra.run.vm03.stdout:Get:51 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB] 2026-03-09T14:24:05.030 INFO:teuthology.orchestra.run.vm04.stdout:Get:80 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 19.2.3-678-ge911bdeb-1jammy [3597 kB] 2026-03-09T14:24:05.124 INFO:teuthology.orchestra.run.vm03.stdout:Get:52 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B] 2026-03-09T14:24:05.124 INFO:teuthology.orchestra.run.vm03.stdout:Get:53 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB] 2026-03-09T14:24:05.124 INFO:teuthology.orchestra.run.vm03.stdout:Get:54 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB] 2026-03-09T14:24:05.124 INFO:teuthology.orchestra.run.vm03.stdout:Get:55 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB] 2026-03-09T14:24:05.125 INFO:teuthology.orchestra.run.vm03.stdout:Get:56 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB] 2026-03-09T14:24:05.125 INFO:teuthology.orchestra.run.vm03.stdout:Get:57 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB] 2026-03-09T14:24:05.167 INFO:teuthology.orchestra.run.vm03.stdout:Get:58 https://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB] 2026-03-09T14:24:05.186 INFO:teuthology.orchestra.run.vm05.stdout:Get:80 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 19.2.3-678-ge911bdeb-1jammy [3597 kB] 2026-03-09T14:24:05.210 INFO:teuthology.orchestra.run.vm03.stdout:Get:59 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB] 2026-03-09T14:24:05.210 INFO:teuthology.orchestra.run.vm03.stdout:Get:60 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB] 2026-03-09T14:24:05.211 INFO:teuthology.orchestra.run.vm03.stdout:Get:61 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB] 2026-03-09T14:24:05.258 INFO:teuthology.orchestra.run.vm03.stdout:Get:62 https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB] 2026-03-09T14:24:05.259 INFO:teuthology.orchestra.run.vm03.stdout:Get:63 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket 
amd64 3.0~rc1+git+ac3201d-6 [78.9 kB] 2026-03-09T14:24:05.309 INFO:teuthology.orchestra.run.vm04.stdout:Get:81 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB] 2026-03-09T14:24:05.350 INFO:teuthology.orchestra.run.vm03.stdout:Get:64 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB] 2026-03-09T14:24:05.351 INFO:teuthology.orchestra.run.vm03.stdout:Get:65 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB] 2026-03-09T14:24:05.354 INFO:teuthology.orchestra.run.vm03.stdout:Get:66 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB] 2026-03-09T14:24:05.354 INFO:teuthology.orchestra.run.vm03.stdout:Get:67 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB] 2026-03-09T14:24:05.361 INFO:teuthology.orchestra.run.vm03.stdout:Get:68 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 19.2.3-678-ge911bdeb-1jammy [3597 kB] 2026-03-09T14:24:05.372 INFO:teuthology.orchestra.run.vm03.stdout:Get:69 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B] 2026-03-09T14:24:05.372 INFO:teuthology.orchestra.run.vm03.stdout:Get:70 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB] 2026-03-09T14:24:05.373 INFO:teuthology.orchestra.run.vm03.stdout:Get:71 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB] 2026-03-09T14:24:05.373 INFO:teuthology.orchestra.run.vm03.stdout:Get:72 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB] 2026-03-09T14:24:05.399 INFO:teuthology.orchestra.run.vm04.stdout:Get:82 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB] 2026-03-09T14:24:05.426 INFO:teuthology.orchestra.run.vm04.stdout:Get:83 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB] 2026-03-09T14:24:05.427 INFO:teuthology.orchestra.run.vm04.stdout:Get:84 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 19.2.3-678-ge911bdeb-1jammy [184 kB] 2026-03-09T14:24:05.428 INFO:teuthology.orchestra.run.vm04.stdout:Get:85 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB] 2026-03-09T14:24:05.428 INFO:teuthology.orchestra.run.vm04.stdout:Get:86 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 19.2.3-678-ge911bdeb-1jammy [334 kB] 2026-03-09T14:24:05.429 INFO:teuthology.orchestra.run.vm05.stdout:Get:81 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB] 2026-03-09T14:24:05.431 INFO:teuthology.orchestra.run.vm04.stdout:Get:87 
https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB] 2026-03-09T14:24:05.456 INFO:teuthology.orchestra.run.vm03.stdout:Get:73 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB] 2026-03-09T14:24:05.488 INFO:teuthology.orchestra.run.vm03.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB] 2026-03-09T14:24:05.490 INFO:teuthology.orchestra.run.vm03.stdout:Get:75 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB] 2026-03-09T14:24:05.541 INFO:teuthology.orchestra.run.vm05.stdout:Get:82 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB] 2026-03-09T14:24:05.545 INFO:teuthology.orchestra.run.vm05.stdout:Get:83 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB] 2026-03-09T14:24:05.546 INFO:teuthology.orchestra.run.vm05.stdout:Get:84 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 19.2.3-678-ge911bdeb-1jammy [184 kB] 2026-03-09T14:24:05.548 INFO:teuthology.orchestra.run.vm05.stdout:Get:85 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB] 2026-03-09T14:24:05.548 INFO:teuthology.orchestra.run.vm05.stdout:Get:86 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 19.2.3-678-ge911bdeb-1jammy [334 kB] 2026-03-09T14:24:05.574 INFO:teuthology.orchestra.run.vm03.stdout:Get:76 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 0.9.6-1.3 [24.8 kB] 2026-03-09T14:24:05.574 INFO:teuthology.orchestra.run.vm03.stdout:Get:77 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB] 2026-03-09T14:24:05.574 INFO:teuthology.orchestra.run.vm03.stdout:Get:78 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB] 2026-03-09T14:24:05.579 INFO:teuthology.orchestra.run.vm03.stdout:Get:79 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB] 2026-03-09T14:24:05.579 INFO:teuthology.orchestra.run.vm03.stdout:Get:80 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB] 2026-03-09T14:24:05.593 INFO:teuthology.orchestra.run.vm03.stdout:Get:81 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB] 2026-03-09T14:24:05.596 INFO:teuthology.orchestra.run.vm03.stdout:Get:82 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB] 2026-03-09T14:24:05.596 INFO:teuthology.orchestra.run.vm03.stdout:Get:83 
https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 19.2.3-678-ge911bdeb-1jammy [184 kB] 2026-03-09T14:24:05.598 INFO:teuthology.orchestra.run.vm03.stdout:Get:84 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB] 2026-03-09T14:24:05.599 INFO:teuthology.orchestra.run.vm03.stdout:Get:85 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 19.2.3-678-ge911bdeb-1jammy [334 kB] 2026-03-09T14:24:05.602 INFO:teuthology.orchestra.run.vm03.stdout:Get:86 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB] 2026-03-09T14:24:05.650 INFO:teuthology.orchestra.run.vm05.stdout:Get:87 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB] 2026-03-09T14:24:05.674 INFO:teuthology.orchestra.run.vm03.stdout:Get:87 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB] 2026-03-09T14:24:05.743 INFO:teuthology.orchestra.run.vm04.stdout:Get:88 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB] 2026-03-09T14:24:05.744 INFO:teuthology.orchestra.run.vm04.stdout:Get:89 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB] 2026-03-09T14:24:05.751 INFO:teuthology.orchestra.run.vm04.stdout:Get:90 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB] 2026-03-09T14:24:05.949 INFO:teuthology.orchestra.run.vm03.stdout:Get:88 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB] 2026-03-09T14:24:05.949 INFO:teuthology.orchestra.run.vm03.stdout:Get:89 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB] 2026-03-09T14:24:05.955 INFO:teuthology.orchestra.run.vm03.stdout:Get:90 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB] 2026-03-09T14:24:06.113 INFO:teuthology.orchestra.run.vm05.stdout:Get:88 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB] 2026-03-09T14:24:06.114 INFO:teuthology.orchestra.run.vm05.stdout:Get:89 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB] 2026-03-09T14:24:06.121 INFO:teuthology.orchestra.run.vm05.stdout:Get:90 
https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB] 2026-03-09T14:24:06.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:06 vm05 bash[20070]: cluster 2026-03-09T14:24:04.960061+0000 mgr.x (mgr.14150) 320 : cluster [DBG] pgmap v268: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:24:06.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:06 vm04 bash[19581]: cluster 2026-03-09T14:24:04.960061+0000 mgr.x (mgr.14150) 320 : cluster [DBG] pgmap v268: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:24:06.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:06 vm03 bash[17524]: cluster 2026-03-09T14:24:04.960061+0000 mgr.x (mgr.14150) 320 : cluster [DBG] pgmap v268: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:24:07.039 INFO:teuthology.orchestra.run.vm04.stdout:Get:91 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB] 2026-03-09T14:24:07.216 INFO:teuthology.orchestra.run.vm03.stdout:Get:91 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB] 2026-03-09T14:24:07.283 INFO:teuthology.orchestra.run.vm04.stdout:Get:92 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB] 2026-03-09T14:24:07.287 INFO:teuthology.orchestra.run.vm04.stdout:Get:93 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 19.2.3-678-ge911bdeb-1jammy [125 kB] 2026-03-09T14:24:07.290 INFO:teuthology.orchestra.run.vm04.stdout:Get:94 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB] 2026-03-09T14:24:07.387 INFO:teuthology.orchestra.run.vm04.stdout:Get:95 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB] 2026-03-09T14:24:07.443 INFO:teuthology.orchestra.run.vm03.stdout:Get:92
https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB] 2026-03-09T14:24:07.448 INFO:teuthology.orchestra.run.vm03.stdout:Get:93 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 19.2.3-678-ge911bdeb-1jammy [125 kB] 2026-03-09T14:24:07.449 INFO:teuthology.orchestra.run.vm03.stdout:Get:94 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB] 2026-03-09T14:24:07.466 INFO:teuthology.orchestra.run.vm03.stdout:Get:95 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB] 2026-03-09T14:24:07.711 INFO:teuthology.orchestra.run.vm03.stdout:Get:96 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB] 2026-03-09T14:24:07.733 INFO:teuthology.orchestra.run.vm04.stdout:Get:96 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB] 2026-03-09T14:24:07.895 INFO:teuthology.orchestra.run.vm05.stdout:Get:91 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB] 2026-03-09T14:24:08.053 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:07 vm03 bash[37744]: debug there is no tcmu-runner data available 2026-03-09T14:24:08.233 INFO:teuthology.orchestra.run.vm05.stdout:Get:92 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB] 2026-03-09T14:24:08.237 INFO:teuthology.orchestra.run.vm05.stdout:Get:93 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 19.2.3-678-ge911bdeb-1jammy [125 kB] 2026-03-09T14:24:08.240 INFO:teuthology.orchestra.run.vm05.stdout:Get:94 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB] 2026-03-09T14:24:08.328 INFO:teuthology.orchestra.run.vm05.stdout:Get:95 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB] 2026-03-09T14:24:08.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:08 vm05 bash[20070]: cluster 2026-03-09T14:24:06.960361+0000 mgr.x (mgr.14150) 321 : cluster [DBG] pgmap v269: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:24:08.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:08 vm04 bash[19581]: cluster 2026-03-09T14:24:06.960361+0000 mgr.x (mgr.14150) 321 : cluster [DBG] pgmap v269: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:08.554 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:08 vm03 bash[17524]: cluster 2026-03-09T14:24:06.960361+0000 mgr.x (mgr.14150) 321 : cluster [DBG] pgmap v269: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:24:08.700 INFO:teuthology.orchestra.run.vm05.stdout:Get:96 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB] 2026-03-09T14:24:08.729 INFO:teuthology.orchestra.run.vm03.stdout:Get:97 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB] 2026-03-09T14:24:08.730 INFO:teuthology.orchestra.run.vm03.stdout:Get:98 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB] 2026-03-09T14:24:08.747 INFO:teuthology.orchestra.run.vm03.stdout:Get:99 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 19.2.3-678-ge911bdeb-1jammy [2503 kB] 2026-03-09T14:24:08.819 INFO:teuthology.orchestra.run.vm04.stdout:Get:97 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB] 2026-03-09T14:24:08.820 INFO:teuthology.orchestra.run.vm04.stdout:Get:98 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB] 2026-03-09T14:24:08.859 INFO:teuthology.orchestra.run.vm03.stdout:Get:100 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB] 2026-03-09T14:24:08.872 INFO:teuthology.orchestra.run.vm03.stdout:Get:101 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB] 2026-03-09T14:24:08.875 INFO:teuthology.orchestra.run.vm03.stdout:Get:102 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB] 2026-03-09T14:24:08.907 INFO:teuthology.orchestra.run.vm04.stdout:Get:99 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64
19.2.3-678-ge911bdeb-1jammy [2503 kB] 2026-03-09T14:24:08.984 INFO:teuthology.orchestra.run.vm03.stdout:Get:103 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 19.2.3-678-ge911bdeb-1jammy [8625 kB] 2026-03-09T14:24:09.008 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:24:08 vm05 bash[38699]: debug there is no tcmu-runner data available 2026-03-09T14:24:09.025 INFO:teuthology.orchestra.run.vm04.stdout:Get:100 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB] 2026-03-09T14:24:09.043 INFO:teuthology.orchestra.run.vm04.stdout:Get:101 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB] 2026-03-09T14:24:09.047 INFO:teuthology.orchestra.run.vm04.stdout:Get:102 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB] 2026-03-09T14:24:09.164 INFO:teuthology.orchestra.run.vm04.stdout:Get:103 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 19.2.3-678-ge911bdeb-1jammy [8625 kB] 2026-03-09T14:24:09.339 INFO:teuthology.orchestra.run.vm03.stdout:Get:104 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB] 2026-03-09T14:24:09.339 INFO:teuthology.orchestra.run.vm03.stdout:Get:105 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB] 2026-03-09T14:24:09.615 INFO:teuthology.orchestra.run.vm04.stdout:Get:104 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB] 2026-03-09T14:24:09.615 INFO:teuthology.orchestra.run.vm04.stdout:Get:105 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB] 2026-03-09T14:24:09.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:09 vm05 bash[20070]: audit 2026-03-09T14:24:07.702897+0000 mgr.x (mgr.14150) 322 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:24:09.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:09 vm04 bash[19581]: audit 2026-03-09T14:24:07.702897+0000 mgr.x (mgr.14150) 322 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:09.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:09 vm03 bash[17524]: audit 2026-03-09T14:24:07.702897+0000 mgr.x (mgr.14150) 322 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:24:10.206 INFO:teuthology.orchestra.run.vm05.stdout:Get:97 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB] 2026-03-09T14:24:10.206 INFO:teuthology.orchestra.run.vm05.stdout:Get:98 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB] 2026-03-09T14:24:10.215 INFO:teuthology.orchestra.run.vm05.stdout:Get:99 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 19.2.3-678-ge911bdeb-1jammy [2503 kB] 2026-03-09T14:24:10.439 INFO:teuthology.orchestra.run.vm05.stdout:Get:100 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB] 2026-03-09T14:24:10.482 INFO:teuthology.orchestra.run.vm05.stdout:Get:101 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB] 2026-03-09T14:24:10.486 INFO:teuthology.orchestra.run.vm05.stdout:Get:102 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB] 2026-03-09T14:24:10.714 INFO:teuthology.orchestra.run.vm05.stdout:Get:103 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 19.2.3-678-ge911bdeb-1jammy [8625 kB] 2026-03-09T14:24:10.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:10 vm05 bash[20070]: audit 2026-03-09T14:24:08.670194+0000 mgr.x (mgr.14150) 323 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:24:10.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:10 vm05 bash[20070]: cluster 2026-03-09T14:24:08.960621+0000 mgr.x (mgr.14150) 324 : cluster [DBG] pgmap v270: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:24:10.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:10 vm05 bash[20070]: cluster 2026-03-09T14:24:08.960621+0000 mgr.x (mgr.14150) 324 : cluster [DBG] pgmap v270: 4 pgs: 4 active+clean; 
449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:24:10.759 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:10 vm04 bash[19581]: audit 2026-03-09T14:24:08.670194+0000 mgr.x (mgr.14150) 323 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:24:10.759 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:10 vm04 bash[19581]: audit 2026-03-09T14:24:08.670194+0000 mgr.x (mgr.14150) 323 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:24:10.759 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:10 vm04 bash[19581]: cluster 2026-03-09T14:24:08.960621+0000 mgr.x (mgr.14150) 324 : cluster [DBG] pgmap v270: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:24:10.759 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:10 vm04 bash[19581]: cluster 2026-03-09T14:24:08.960621+0000 mgr.x (mgr.14150) 324 : cluster [DBG] pgmap v270: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:24:10.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:10 vm03 bash[17524]: audit 2026-03-09T14:24:08.670194+0000 mgr.x (mgr.14150) 323 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:24:10.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:10 vm03 bash[17524]: audit 2026-03-09T14:24:08.670194+0000 mgr.x (mgr.14150) 323 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:24:10.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:10 vm03 bash[17524]: cluster 2026-03-09T14:24:08.960621+0000 mgr.x (mgr.14150) 324 : cluster [DBG] pgmap v270: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:24:10.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:10 vm03 bash[17524]: cluster 2026-03-09T14:24:08.960621+0000 mgr.x (mgr.14150) 324 : cluster [DBG] pgmap v270: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:24:11.435 INFO:teuthology.orchestra.run.vm05.stdout:Get:104 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB] 2026-03-09T14:24:11.435 INFO:teuthology.orchestra.run.vm05.stdout:Get:105 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB] 2026-03-09T14:24:11.625 INFO:teuthology.orchestra.run.vm03.stdout:Get:106 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB] 2026-03-09T14:24:11.625 INFO:teuthology.orchestra.run.vm03.stdout:Get:107 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB] 2026-03-09T14:24:11.625 INFO:teuthology.orchestra.run.vm03.stdout:Get:108 
https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB] 2026-03-09T14:24:12.322 INFO:teuthology.orchestra.run.vm03.stdout:Get:109 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB] 2026-03-09T14:24:12.648 INFO:teuthology.orchestra.run.vm03.stdout:Fetched 178 MB in 9s (19.5 MB/s) 2026-03-09T14:24:12.662 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package liblttng-ust1:amd64. 2026-03-09T14:24:12.695 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 111717 files and directories currently installed.) 2026-03-09T14:24:12.697 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ... 2026-03-09T14:24:12.699 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-09T14:24:12.721 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libdouble-conversion3:amd64. 2026-03-09T14:24:12.726 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ... 2026-03-09T14:24:12.727 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T14:24:12.741 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libpcre2-16-0:amd64. 2026-03-09T14:24:12.747 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ... 2026-03-09T14:24:12.748 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 
2026-03-09T14:24:12.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:12 vm05 bash[20070]: cluster 2026-03-09T14:24:10.960897+0000 mgr.x (mgr.14150) 325 : cluster [DBG] pgmap v271: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:12.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:12 vm04 bash[19581]: cluster 2026-03-09T14:24:10.960897+0000 mgr.x (mgr.14150) 325 : cluster [DBG] pgmap v271: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:12.768 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libqt5core5a:amd64.
2026-03-09T14:24:12.774 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-09T14:24:12.777 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T14:24:12.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:12 vm03 bash[17524]: cluster 2026-03-09T14:24:10.960897+0000 mgr.x (mgr.14150) 325 : cluster [DBG] pgmap v271: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:12.810 INFO:teuthology.orchestra.run.vm04.stdout:Get:106 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB]
2026-03-09T14:24:12.811 INFO:teuthology.orchestra.run.vm04.stdout:Get:107 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB]
2026-03-09T14:24:12.811 INFO:teuthology.orchestra.run.vm04.stdout:Get:108 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB]
2026-03-09T14:24:12.818 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libqt5dbus5:amd64.
2026-03-09T14:24:12.823 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-09T14:24:12.824 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T14:24:12.843 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libqt5network5:amd64.
2026-03-09T14:24:12.848 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-09T14:24:12.849 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T14:24:12.872 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libthrift-0.16.0:amd64.
2026-03-09T14:24:12.877 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ...
2026-03-09T14:24:12.879 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-09T14:24:12.900 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:12.903 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-09T14:24:12.977 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:12.979 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-09T14:24:13.048 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libnbd0.
2026-03-09T14:24:13.052 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ...
2026-03-09T14:24:13.053 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libnbd0 (1.10.5-1) ...
2026-03-09T14:24:13.073 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libcephfs2.
2026-03-09T14:24:13.074 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:13.075 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:13.101 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-rados.
2026-03-09T14:24:13.106 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:13.106 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:13.128 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-ceph-argparse.
2026-03-09T14:24:13.133 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T14:24:13.134 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:13.149 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-cephfs.
2026-03-09T14:24:13.155 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:13.156 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:13.173 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-ceph-common.
2026-03-09T14:24:13.179 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T14:24:13.179 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:13.200 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-wcwidth.
2026-03-09T14:24:13.205 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ...
2026-03-09T14:24:13.206 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-09T14:24:13.223 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-prettytable.
2026-03-09T14:24:13.229 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ...
2026-03-09T14:24:13.230 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-prettytable (2.5.0-2) ...
2026-03-09T14:24:13.244 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-rbd.
2026-03-09T14:24:13.249 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:13.250 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:13.269 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package librdkafka1:amd64.
2026-03-09T14:24:13.275 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ...
2026-03-09T14:24:13.276 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-09T14:24:13.298 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libreadline-dev:amd64.
2026-03-09T14:24:13.304 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ...
2026-03-09T14:24:13.305 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ...
2026-03-09T14:24:13.322 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package liblua5.3-dev:amd64.
2026-03-09T14:24:13.328 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ...
2026-03-09T14:24:13.329 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-09T14:24:13.347 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package lua5.1.
2026-03-09T14:24:13.352 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ...
2026-03-09T14:24:13.353 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ...
2026-03-09T14:24:13.373 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package lua-any.
2026-03-09T14:24:13.379 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ...
2026-03-09T14:24:13.379 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking lua-any (27ubuntu1) ...
2026-03-09T14:24:13.393 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package zip.
2026-03-09T14:24:13.398 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ...
2026-03-09T14:24:13.399 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking zip (3.0-12build2) ...
2026-03-09T14:24:13.418 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package unzip.
2026-03-09T14:24:13.426 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ...
2026-03-09T14:24:13.427 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking unzip (6.0-26ubuntu3.2) ...
2026-03-09T14:24:13.447 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package luarocks.
2026-03-09T14:24:13.453 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ...
2026-03-09T14:24:13.454 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ...
2026-03-09T14:24:13.557 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package librgw2.
2026-03-09T14:24:13.561 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:13.562 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:13.598 INFO:teuthology.orchestra.run.vm04.stdout:Get:109 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB]
2026-03-09T14:24:13.678 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-rgw.
2026-03-09T14:24:13.684 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:13.685 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:13.703 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package liboath0:amd64.
2026-03-09T14:24:13.709 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ...
2026-03-09T14:24:13.710 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-09T14:24:13.727 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libradosstriper1.
2026-03-09T14:24:13.732 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:13.733 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:13.753 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-common.
2026-03-09T14:24:13.757 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:13.758 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:13.948 INFO:teuthology.orchestra.run.vm04.stdout:Fetched 178 MB in 10s (17.8 MB/s)
2026-03-09T14:24:13.989 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package liblttng-ust1:amd64.
2026-03-09T14:24:14.033 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 111717 files and directories currently installed.)
2026-03-09T14:24:14.036 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ...
2026-03-09T14:24:14.038 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-09T14:24:14.174 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libdouble-conversion3:amd64.
2026-03-09T14:24:14.177 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ...
2026-03-09T14:24:14.178 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-09T14:24:14.181 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-base.
2026-03-09T14:24:14.186 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:14.191 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:14.191 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libpcre2-16-0:amd64.
2026-03-09T14:24:14.197 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ...
2026-03-09T14:24:14.197 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-09T14:24:14.216 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libqt5core5a:amd64.
2026-03-09T14:24:14.221 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-09T14:24:14.224 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T14:24:14.291 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jaraco.functools.
2026-03-09T14:24:14.296 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ...
2026-03-09T14:24:14.297 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ...
2026-03-09T14:24:14.297 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libqt5dbus5:amd64.
2026-03-09T14:24:14.297 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-09T14:24:14.298 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T14:24:14.310 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-cheroot.
2026-03-09T14:24:14.315 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ...
2026-03-09T14:24:14.316 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-09T14:24:14.318 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libqt5network5:amd64.
2026-03-09T14:24:14.325 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-09T14:24:14.325 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T14:24:14.332 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jaraco.classes.
2026-03-09T14:24:14.337 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ...
2026-03-09T14:24:14.340 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ...
2026-03-09T14:24:14.352 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libthrift-0.16.0:amd64.
2026-03-09T14:24:14.354 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jaraco.text.
2026-03-09T14:24:14.357 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ...
2026-03-09T14:24:14.358 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-09T14:24:14.358 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ...
2026-03-09T14:24:14.359 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jaraco.text (3.6.0-2) ...
2026-03-09T14:24:14.375 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jaraco.collections.
2026-03-09T14:24:14.379 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ...
2026-03-09T14:24:14.380 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ...
2026-03-09T14:24:14.381 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:14.384 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-09T14:24:14.393 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-tempora.
2026-03-09T14:24:14.398 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ...
2026-03-09T14:24:14.398 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-tempora (4.1.2-1) ...
2026-03-09T14:24:14.415 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-portend.
2026-03-09T14:24:14.420 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ...
2026-03-09T14:24:14.421 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-portend (3.0.0-1) ...
2026-03-09T14:24:14.449 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-zc.lockfile.
2026-03-09T14:24:14.453 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ...
2026-03-09T14:24:14.453 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-zc.lockfile (2.0-1) ...
2026-03-09T14:24:14.464 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:14.467 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-09T14:24:14.467 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-cherrypy3.
2026-03-09T14:24:14.471 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ...
2026-03-09T14:24:14.472 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ...
2026-03-09T14:24:14.512 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-natsort.
2026-03-09T14:24:14.517 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ...
2026-03-09T14:24:14.518 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-natsort (8.0.2-1) ...
2026-03-09T14:24:14.530 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libnbd0.
2026-03-09T14:24:14.532 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-logutils.
2026-03-09T14:24:14.535 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ...
2026-03-09T14:24:14.536 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libnbd0 (1.10.5-1) ...
2026-03-09T14:24:14.537 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ...
2026-03-09T14:24:14.537 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-logutils (0.3.3-8) ...
2026-03-09T14:24:14.554 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-mako.
2026-03-09T14:24:14.555 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libcephfs2.
2026-03-09T14:24:14.559 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ...
2026-03-09T14:24:14.560 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-09T14:24:14.560 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:14.561 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:14.585 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-simplegeneric.
2026-03-09T14:24:14.586 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ...
2026-03-09T14:24:14.587 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-simplegeneric (0.8.1-3) ...
2026-03-09T14:24:14.591 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-rados.
2026-03-09T14:24:14.596 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:14.597 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:14.605 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-singledispatch.
2026-03-09T14:24:14.607 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ...
2026-03-09T14:24:14.608 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ...
2026-03-09T14:24:14.620 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-ceph-argparse.
2026-03-09T14:24:14.622 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-webob.
2026-03-09T14:24:14.626 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T14:24:14.626 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ...
2026-03-09T14:24:14.627 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:14.627 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-09T14:24:14.641 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-cephfs.
2026-03-09T14:24:14.645 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-waitress.
2026-03-09T14:24:14.647 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:14.648 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:14.649 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ...
2026-03-09T14:24:14.651 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-09T14:24:14.665 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-ceph-common.
2026-03-09T14:24:14.667 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-tempita.
2026-03-09T14:24:14.670 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T14:24:14.671 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:14.671 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ...
2026-03-09T14:24:14.672 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ...
2026-03-09T14:24:14.687 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-paste.
2026-03-09T14:24:14.690 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-wcwidth.
2026-03-09T14:24:14.691 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ...
2026-03-09T14:24:14.692 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ...
2026-03-09T14:24:14.695 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ...
2026-03-09T14:24:14.696 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-09T14:24:14.715 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-prettytable.
2026-03-09T14:24:14.720 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ...
2026-03-09T14:24:14.721 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-prettytable (2.5.0-2) ...
2026-03-09T14:24:14.726 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python-pastedeploy-tpl.
2026-03-09T14:24:14.731 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ...
2026-03-09T14:24:14.731 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ...
2026-03-09T14:24:14.735 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-rbd.
2026-03-09T14:24:14.739 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:14.740 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:14.745 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pastedeploy.
2026-03-09T14:24:14.750 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ...
2026-03-09T14:24:14.750 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pastedeploy (2.1.1-1) ...
2026-03-09T14:24:14.758 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package librdkafka1:amd64.
2026-03-09T14:24:14.762 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ...
2026-03-09T14:24:14.763 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-09T14:24:14.766 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-webtest.
2026-03-09T14:24:14.771 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ...
2026-03-09T14:24:14.773 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-webtest (2.0.35-1) ...
2026-03-09T14:24:14.783 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libreadline-dev:amd64.
2026-03-09T14:24:14.787 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ...
2026-03-09T14:24:14.788 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pecan.
2026-03-09T14:24:14.788 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ...
2026-03-09T14:24:14.793 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ...
2026-03-09T14:24:14.794 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ...
2026-03-09T14:24:14.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:14 vm03 bash[17524]: cluster 2026-03-09T14:24:12.961162+0000 mgr.x (mgr.14150) 326 : cluster [DBG] pgmap v272: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:14.809 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package liblua5.3-dev:amd64.
2026-03-09T14:24:14.815 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ...
2026-03-09T14:24:14.816 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-09T14:24:14.827 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-werkzeug.
2026-03-09T14:24:14.832 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ...
2026-03-09T14:24:14.833 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-09T14:24:14.837 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package lua5.1.
2026-03-09T14:24:14.844 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ...
2026-03-09T14:24:14.845 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ...
2026-03-09T14:24:14.858 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-modules-core.
2026-03-09T14:24:14.864 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T14:24:14.865 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:14.865 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package lua-any.
2026-03-09T14:24:14.871 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ...
2026-03-09T14:24:14.872 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking lua-any (27ubuntu1) ...
2026-03-09T14:24:14.886 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package zip.
2026-03-09T14:24:14.892 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ...
2026-03-09T14:24:14.893 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking zip (3.0-12build2) ...
2026-03-09T14:24:14.902 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libsqlite3-mod-ceph.
2026-03-09T14:24:14.908 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:14.909 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:14.911 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package unzip.
2026-03-09T14:24:14.917 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ...
2026-03-09T14:24:14.918 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking unzip (6.0-26ubuntu3.2) ...
2026-03-09T14:24:14.927 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr.
2026-03-09T14:24:14.933 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:14.934 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:14.937 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package luarocks.
2026-03-09T14:24:14.943 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ...
2026-03-09T14:24:14.944 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ...
2026-03-09T14:24:14.964 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mon.
2026-03-09T14:24:14.970 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:14.972 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:14.992 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package librgw2.
2026-03-09T14:24:14.999 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:15.000 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:15.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:14 vm05 bash[20070]: cluster 2026-03-09T14:24:12.961162+0000 mgr.x (mgr.14150) 326 : cluster [DBG] pgmap v272: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:15.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:14 vm04 bash[19581]: cluster 2026-03-09T14:24:12.961162+0000 mgr.x (mgr.14150) 326 : cluster [DBG] pgmap v272: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:15.108 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libfuse2:amd64.
2026-03-09T14:24:15.110 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-rgw.
2026-03-09T14:24:15.115 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ...
2026-03-09T14:24:15.116 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:15.118 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:15.123 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-09T14:24:15.136 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package liboath0:amd64.
2026-03-09T14:24:15.141 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ...
2026-03-09T14:24:15.142 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-09T14:24:15.143 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-osd.
2026-03-09T14:24:15.149 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:15.150 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:15.159 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libradosstriper1.
2026-03-09T14:24:15.164 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:15.164 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:15.189 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-common.
2026-03-09T14:24:15.195 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:15.196 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:15.250 INFO:teuthology.orchestra.run.vm05.stdout:Get:106 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB]
2026-03-09T14:24:15.269 INFO:teuthology.orchestra.run.vm05.stdout:Get:107 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB]
2026-03-09T14:24:15.270 INFO:teuthology.orchestra.run.vm05.stdout:Get:108 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB]
2026-03-09T14:24:15.488 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph.
2026-03-09T14:24:15.494 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:15.558 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:15.639 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-base.
2026-03-09T14:24:15.639 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-fuse.
2026-03-09T14:24:15.645 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:15.646 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:15.646 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:15.650 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:15.679 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mds.
2026-03-09T14:24:15.685 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:15.686 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:15.759 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package cephadm.
2026-03-09T14:24:15.762 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-jaraco.functools.
2026-03-09T14:24:15.764 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:15.765 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:15.768 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ...
2026-03-09T14:24:15.768 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ...
2026-03-09T14:24:15.784 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-cheroot.
2026-03-09T14:24:15.784 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-asyncssh.
2026-03-09T14:24:15.789 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ...
2026-03-09T14:24:15.790 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ...
2026-03-09T14:24:15.790 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-09T14:24:15.791 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-09T14:24:15.811 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-jaraco.classes.
2026-03-09T14:24:15.816 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ...
2026-03-09T14:24:15.817 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ...
2026-03-09T14:24:15.821 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-cephadm.
2026-03-09T14:24:15.825 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T14:24:15.826 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:15.832 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-jaraco.text.
2026-03-09T14:24:15.837 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ...
2026-03-09T14:24:15.838 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-jaraco.text (3.6.0-2) ...
2026-03-09T14:24:15.853 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-repoze.lru.
2026-03-09T14:24:15.855 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-jaraco.collections.
2026-03-09T14:24:15.858 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ...
2026-03-09T14:24:15.859 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-repoze.lru (0.7-2) ...
2026-03-09T14:24:15.862 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ...
2026-03-09T14:24:15.863 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ...
2026-03-09T14:24:15.879 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-tempora.
2026-03-09T14:24:15.880 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-routes.
2026-03-09T14:24:15.884 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ...
2026-03-09T14:24:15.884 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ...
2026-03-09T14:24:15.884 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-tempora (4.1.2-1) ...
2026-03-09T14:24:15.885 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ...
2026-03-09T14:24:15.901 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-portend.
2026-03-09T14:24:15.906 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ...
2026-03-09T14:24:15.907 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-portend (3.0.0-1) ...
2026-03-09T14:24:15.912 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-dashboard.
2026-03-09T14:24:15.917 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T14:24:15.919 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:15.922 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-zc.lockfile.
2026-03-09T14:24:15.926 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ...
2026-03-09T14:24:15.927 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-zc.lockfile (2.0-1) ...
2026-03-09T14:24:15.945 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-cherrypy3.
2026-03-09T14:24:15.949 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ...
2026-03-09T14:24:15.950 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ...
2026-03-09T14:24:15.982 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-natsort.
2026-03-09T14:24:15.988 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ...
2026-03-09T14:24:15.989 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-natsort (8.0.2-1) ...
2026-03-09T14:24:16.008 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-logutils.
2026-03-09T14:24:16.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:15 vm05 bash[20070]: cluster 2026-03-09T14:24:14.961492+0000 mgr.x (mgr.14150) 327 : cluster [DBG] pgmap v273: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:16.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:15 vm04 bash[19581]: cluster 2026-03-09T14:24:14.961492+0000 mgr.x (mgr.14150) 327 : cluster [DBG] pgmap v273: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:16.014 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ...
2026-03-09T14:24:16.015 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-logutils (0.3.3-8) ...
2026-03-09T14:24:16.035 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-mako.
2026-03-09T14:24:16.040 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ...
2026-03-09T14:24:16.041 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-09T14:24:16.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:15 vm03 bash[17524]: cluster 2026-03-09T14:24:14.961492+0000 mgr.x (mgr.14150) 327 : cluster [DBG] pgmap v273: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:16.065 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-simplegeneric.
2026-03-09T14:24:16.070 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ...
2026-03-09T14:24:16.071 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-simplegeneric (0.8.1-3) ...
2026-03-09T14:24:16.089 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-singledispatch.
2026-03-09T14:24:16.094 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ...
2026-03-09T14:24:16.095 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ...
2026-03-09T14:24:16.112 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-webob.
2026-03-09T14:24:16.116 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ...
2026-03-09T14:24:16.117 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-09T14:24:16.140 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-waitress.
2026-03-09T14:24:16.144 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ...
2026-03-09T14:24:16.146 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-09T14:24:16.254 INFO:teuthology.orchestra.run.vm05.stdout:Get:109 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB]
2026-03-09T14:24:16.273 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-tempita.
2026-03-09T14:24:16.277 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ...
2026-03-09T14:24:16.278 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ...
2026-03-09T14:24:16.286 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-sklearn-lib:amd64.
2026-03-09T14:24:16.292 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ...
2026-03-09T14:24:16.293 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-09T14:24:16.296 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-paste.
2026-03-09T14:24:16.302 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ...
2026-03-09T14:24:16.303 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ...
2026-03-09T14:24:16.352 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python-pastedeploy-tpl.
2026-03-09T14:24:16.355 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-joblib.
2026-03-09T14:24:16.359 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ...
2026-03-09T14:24:16.359 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ...
2026-03-09T14:24:16.361 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ...
2026-03-09T14:24:16.363 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ...
2026-03-09T14:24:16.378 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-pastedeploy.
2026-03-09T14:24:16.383 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ...
2026-03-09T14:24:16.384 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-pastedeploy (2.1.1-1) ...
2026-03-09T14:24:16.400 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-threadpoolctl.
2026-03-09T14:24:16.404 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-webtest.
2026-03-09T14:24:16.406 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ...
2026-03-09T14:24:16.407 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ...
2026-03-09T14:24:16.411 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ...
2026-03-09T14:24:16.411 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-webtest (2.0.35-1) ...
2026-03-09T14:24:16.424 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-sklearn.
2026-03-09T14:24:16.430 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ...
2026-03-09T14:24:16.431 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-09T14:24:16.432 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-pecan.
2026-03-09T14:24:16.438 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ...
2026-03-09T14:24:16.439 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ...
2026-03-09T14:24:16.475 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-werkzeug.
2026-03-09T14:24:16.481 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ...
2026-03-09T14:24:16.482 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-09T14:24:16.510 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mgr-modules-core.
2026-03-09T14:24:16.516 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T14:24:16.530 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:16.571 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local.
2026-03-09T14:24:16.574 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libsqlite3-mod-ceph.
2026-03-09T14:24:16.577 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T14:24:16.578 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:16.579 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:16.580 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:16.582 INFO:teuthology.orchestra.run.vm05.stdout:Fetched 178 MB in 13s (13.8 MB/s)
2026-03-09T14:24:16.614 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mgr.
2026-03-09T14:24:16.615 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package liblttng-ust1:amd64.
2026-03-09T14:24:16.619 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:16.621 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:16.653 INFO:teuthology.orchestra.run.vm05.stdout:(Reading database ... 111717 files and directories currently installed.)
2026-03-09T14:24:16.655 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ...
2026-03-09T14:24:16.656 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-09T14:24:16.657 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mon.
2026-03-09T14:24:16.664 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:16.665 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:16.676 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libdouble-conversion3:amd64.
2026-03-09T14:24:16.681 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ...
2026-03-09T14:24:16.682 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-09T14:24:16.695 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libpcre2-16-0:amd64.
2026-03-09T14:24:16.701 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ...
2026-03-09T14:24:16.702 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-09T14:24:16.718 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libqt5core5a:amd64.
2026-03-09T14:24:16.723 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-09T14:24:16.749 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T14:24:16.849 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libfuse2:amd64.
2026-03-09T14:24:16.855 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ...
2026-03-09T14:24:16.856 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-09T14:24:16.856 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libqt5dbus5:amd64.
2026-03-09T14:24:16.861 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-09T14:24:16.862 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T14:24:16.869 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-cachetools.
2026-03-09T14:24:16.872 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ...
2026-03-09T14:24:16.873 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-cachetools (5.0.0-1) ...
2026-03-09T14:24:16.878 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libqt5network5:amd64.
2026-03-09T14:24:16.878 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-osd.
2026-03-09T14:24:16.883 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-09T14:24:16.884 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T14:24:16.885 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:16.886 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:16.890 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-rsa.
2026-03-09T14:24:16.897 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ...
2026-03-09T14:24:16.898 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-rsa (4.8-1) ...
2026-03-09T14:24:16.911 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libthrift-0.16.0:amd64.
2026-03-09T14:24:16.916 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ...
2026-03-09T14:24:16.917 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-09T14:24:16.918 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-google-auth.
2026-03-09T14:24:16.924 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ...
2026-03-09T14:24:16.933 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-google-auth (1.5.1-3) ...
2026-03-09T14:24:16.947 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:16.949 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-09T14:24:16.954 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-requests-oauthlib.
2026-03-09T14:24:16.960 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ...
2026-03-09T14:24:16.961 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-09T14:24:16.981 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-websocket.
2026-03-09T14:24:16.987 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ...
2026-03-09T14:24:17.003 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-websocket (1.2.3-1) ...
2026-03-09T14:24:17.049 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:17.052 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-09T14:24:17.052 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-kubernetes.
2026-03-09T14:24:17.059 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ...
2026-03-09T14:24:17.151 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-09T14:24:17.221 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph.
2026-03-09T14:24:17.225 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libnbd0.
2026-03-09T14:24:17.226 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:17.227 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:17.230 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ...
2026-03-09T14:24:17.231 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libnbd0 (1.10.5-1) ...
2026-03-09T14:24:17.242 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-fuse.
2026-03-09T14:24:17.246 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libcephfs2.
2026-03-09T14:24:17.247 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:17.248 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:17.251 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:17.252 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:17.300 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-rados.
2026-03-09T14:24:17.303 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mds.
2026-03-09T14:24:17.306 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:17.307 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:17.308 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:17.309 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:17.328 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-ceph-argparse.
2026-03-09T14:24:17.334 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T14:24:17.344 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:17.349 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-k8sevents.
2026-03-09T14:24:17.352 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T14:24:17.353 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:17.359 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package cephadm.
2026-03-09T14:24:17.359 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-cephfs.
2026-03-09T14:24:17.363 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:17.364 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:17.364 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:17.365 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:17.373 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libonig5:amd64.
2026-03-09T14:24:17.379 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ...
2026-03-09T14:24:17.380 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-09T14:24:17.383 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-ceph-common.
2026-03-09T14:24:17.386 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-asyncssh.
2026-03-09T14:24:17.388 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T14:24:17.389 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:17.391 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ...
2026-03-09T14:24:17.392 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-09T14:24:17.398 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libjq1:amd64.
2026-03-09T14:24:17.405 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ...
2026-03-09T14:24:17.406 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-09T14:24:17.412 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-wcwidth.
2026-03-09T14:24:17.418 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ...
2026-03-09T14:24:17.419 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-09T14:24:17.421 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mgr-cephadm.
2026-03-09T14:24:17.423 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package jq.
2026-03-09T14:24:17.427 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T14:24:17.428 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:17.429 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ...
2026-03-09T14:24:17.430 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ...
2026-03-09T14:24:17.440 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-prettytable.
2026-03-09T14:24:17.445 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ...
2026-03-09T14:24:17.446 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package socat.
2026-03-09T14:24:17.446 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-prettytable (2.5.0-2) ...
2026-03-09T14:24:17.451 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ...
2026-03-09T14:24:17.452 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ...
2026-03-09T14:24:17.453 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-repoze.lru.
2026-03-09T14:24:17.460 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ...
2026-03-09T14:24:17.461 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-repoze.lru (0.7-2) ...
2026-03-09T14:24:17.465 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-rbd.
2026-03-09T14:24:17.472 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:17.472 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:17.478 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-routes.
2026-03-09T14:24:17.479 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package xmlstarlet.
2026-03-09T14:24:17.484 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ...
2026-03-09T14:24:17.485 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ...
2026-03-09T14:24:17.485 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ...
2026-03-09T14:24:17.486 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking xmlstarlet (1.6.1-2.1) ...
2026-03-09T14:24:17.496 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package librdkafka1:amd64.
2026-03-09T14:24:17.501 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ...
2026-03-09T14:24:17.502 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-09T14:24:17.509 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mgr-dashboard.
2026-03-09T14:24:17.516 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T14:24:17.517 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:17.526 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libreadline-dev:amd64.
2026-03-09T14:24:17.532 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ...
2026-03-09T14:24:17.533 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ...
2026-03-09T14:24:17.534 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-test.
2026-03-09T14:24:17.540 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:17.541 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:17.552 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package liblua5.3-dev:amd64.
2026-03-09T14:24:17.558 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ...
2026-03-09T14:24:17.559 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-09T14:24:17.582 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package lua5.1.
2026-03-09T14:24:17.590 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ...
2026-03-09T14:24:17.591 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ...
2026-03-09T14:24:17.648 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package lua-any.
2026-03-09T14:24:17.655 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ...
2026-03-09T14:24:17.657 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking lua-any (27ubuntu1) ...
2026-03-09T14:24:17.671 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package zip.
2026-03-09T14:24:17.677 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ...
2026-03-09T14:24:17.678 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking zip (3.0-12build2) ...
2026-03-09T14:24:17.699 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package unzip.
2026-03-09T14:24:17.706 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ...
2026-03-09T14:24:17.707 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking unzip (6.0-26ubuntu3.2) ...
2026-03-09T14:24:17.728 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package luarocks.
2026-03-09T14:24:17.734 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ...
2026-03-09T14:24:17.735 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ...
2026-03-09T14:24:17.849 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package librgw2.
2026-03-09T14:24:17.855 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:17.856 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:18.012 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-sklearn-lib:amd64.
2026-03-09T14:24:18.018 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ...
2026-03-09T14:24:18.027 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-09T14:24:18.037 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-rgw.
2026-03-09T14:24:18.043 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:18.053 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:17 vm03 bash[37744]: debug there is no tcmu-runner data available
2026-03-09T14:24:18.062 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:18.186 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package liboath0:amd64.
2026-03-09T14:24:18.192 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ...
2026-03-09T14:24:18.192 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-joblib.
2026-03-09T14:24:18.192 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-09T14:24:18.197 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-volume.
2026-03-09T14:24:18.198 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ...
2026-03-09T14:24:18.199 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ...
2026-03-09T14:24:18.203 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T14:24:18.204 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:18.208 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libradosstriper1.
2026-03-09T14:24:18.214 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:18.215 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:18.234 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-threadpoolctl.
2026-03-09T14:24:18.236 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libcephfs-dev.
2026-03-09T14:24:18.238 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package ceph-common.
2026-03-09T14:24:18.239 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ...
2026-03-09T14:24:18.240 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ...
2026-03-09T14:24:18.242 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:18.243 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:18.244 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:18.245 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:18.260 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-sklearn.
2026-03-09T14:24:18.261 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package lua-socket:amd64.
2026-03-09T14:24:18.266 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ...
2026-03-09T14:24:18.267 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-09T14:24:18.268 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ...
2026-03-09T14:24:18.269 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-09T14:24:18.296 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package lua-sec:amd64.
2026-03-09T14:24:18.302 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ...
2026-03-09T14:24:18.304 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ...
2026-03-09T14:24:18.326 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package nvme-cli.
2026-03-09T14:24:18.333 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ...
2026-03-09T14:24:18.334 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ...
2026-03-09T14:24:18.383 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package pkg-config.
2026-03-09T14:24:18.389 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ...
2026-03-09T14:24:18.390 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ...
2026-03-09T14:24:18.402 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local.
2026-03-09T14:24:18.408 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python-asyncssh-doc.
2026-03-09T14:24:18.409 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T14:24:18.410 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:18.414 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ...
2026-03-09T14:24:18.415 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-09T14:24:18.462 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-iniconfig.
2026-03-09T14:24:18.470 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ...
2026-03-09T14:24:18.471 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-iniconfig (1.1.1-2) ...
2026-03-09T14:24:18.489 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pastescript.
2026-03-09T14:24:18.496 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ...
2026-03-09T14:24:18.499 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pastescript (2.0.2-4) ...
2026-03-09T14:24:18.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:18 vm05 bash[20070]: cluster 2026-03-09T14:24:16.961804+0000 mgr.x (mgr.14150) 328 : cluster [DBG] pgmap v274: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:18.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:18 vm04 bash[19581]: cluster 2026-03-09T14:24:16.961804+0000 mgr.x (mgr.14150) 328 : cluster [DBG] pgmap v274: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:18.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:18 vm03 bash[17524]: cluster 2026-03-09T14:24:16.961804+0000 mgr.x (mgr.14150) 328 : cluster [DBG] pgmap v274: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:18.713 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pluggy.
2026-03-09T14:24:18.719 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package ceph-base.
2026-03-09T14:24:18.720 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ...
2026-03-09T14:24:18.721 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pluggy (0.13.0-7.1) ...
2026-03-09T14:24:18.722 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-cachetools.
2026-03-09T14:24:18.726 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:18.728 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ...
2026-03-09T14:24:18.729 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-cachetools (5.0.0-1) ...
2026-03-09T14:24:18.731 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:18.738 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-psutil.
2026-03-09T14:24:18.744 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ...
2026-03-09T14:24:18.745 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-psutil (5.9.0-1build1) ...
2026-03-09T14:24:18.750 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-rsa.
2026-03-09T14:24:18.757 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ...
2026-03-09T14:24:18.758 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-rsa (4.8-1) ...
2026-03-09T14:24:18.765 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-py.
2026-03-09T14:24:18.771 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ...
2026-03-09T14:24:18.772 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-py (1.10.0-1) ...
2026-03-09T14:24:18.782 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-google-auth.
2026-03-09T14:24:18.789 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ...
2026-03-09T14:24:18.790 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-google-auth (1.5.1-3) ...
2026-03-09T14:24:18.823 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pygments.
2026-03-09T14:24:18.829 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ...
2026-03-09T14:24:18.830 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-09T14:24:18.836 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-requests-oauthlib.
2026-03-09T14:24:18.840 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-jaraco.functools.
2026-03-09T14:24:18.843 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ...
2026-03-09T14:24:18.844 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-09T14:24:18.846 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ...
2026-03-09T14:24:18.847 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ...
2026-03-09T14:24:18.863 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-cheroot.
2026-03-09T14:24:18.868 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ...
2026-03-09T14:24:18.872 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-09T14:24:18.873 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-websocket.
2026-03-09T14:24:18.879 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ...
2026-03-09T14:24:18.881 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-websocket (1.2.3-1) ...
2026-03-09T14:24:18.893 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-jaraco.classes.
2026-03-09T14:24:18.893 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pyinotify.
2026-03-09T14:24:18.899 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ...
2026-03-09T14:24:18.900 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ...
2026-03-09T14:24:18.900 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ...
2026-03-09T14:24:18.901 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ...
2026-03-09T14:24:18.904 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-kubernetes.
2026-03-09T14:24:18.910 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ...
2026-03-09T14:24:18.919 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-jaraco.text.
2026-03-09T14:24:18.919 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-toml.
2026-03-09T14:24:18.923 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-09T14:24:18.925 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ...
2026-03-09T14:24:18.925 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ...
2026-03-09T14:24:18.926 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-jaraco.text (3.6.0-2) ...
2026-03-09T14:24:18.927 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-toml (0.10.2-1) ...
2026-03-09T14:24:18.943 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pytest.
2026-03-09T14:24:18.946 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-jaraco.collections.
2026-03-09T14:24:18.949 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ...
2026-03-09T14:24:18.950 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ...
2026-03-09T14:24:18.951 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ...
2026-03-09T14:24:18.952 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ...
2026-03-09T14:24:18.970 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-tempora.
2026-03-09T14:24:18.975 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ...
2026-03-09T14:24:18.976 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-tempora (4.1.2-1) ...
2026-03-09T14:24:18.982 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-simplejson.
2026-03-09T14:24:18.987 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ...
2026-03-09T14:24:18.988 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-simplejson (3.17.6-1build1) ...
2026-03-09T14:24:18.996 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-portend.
2026-03-09T14:24:19.002 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ...
2026-03-09T14:24:19.003 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-portend (3.0.0-1) ...
2026-03-09T14:24:19.006 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package qttranslations5-l10n.
2026-03-09T14:24:19.008 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:24:18 vm05 bash[38699]: debug there is no tcmu-runner data available
2026-03-09T14:24:19.012 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ...
2026-03-09T14:24:19.013 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ...
2026-03-09T14:24:19.040 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-zc.lockfile.
2026-03-09T14:24:19.045 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ...
2026-03-09T14:24:19.046 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-zc.lockfile (2.0-1) ...
2026-03-09T14:24:19.068 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-cherrypy3.
2026-03-09T14:24:19.073 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ...
2026-03-09T14:24:19.094 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ...
2026-03-09T14:24:19.110 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mgr-k8sevents.
2026-03-09T14:24:19.116 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T14:24:19.117 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:19.124 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package radosgw.
2026-03-09T14:24:19.125 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-natsort.
2026-03-09T14:24:19.130 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ...
2026-03-09T14:24:19.130 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:19.131 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-natsort (8.0.2-1) ...
2026-03-09T14:24:19.131 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:19.133 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libonig5:amd64.
2026-03-09T14:24:19.139 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ...
2026-03-09T14:24:19.140 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-09T14:24:19.147 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-logutils.
2026-03-09T14:24:19.152 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ...
2026-03-09T14:24:19.153 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-logutils (0.3.3-8) ...
2026-03-09T14:24:19.158 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libjq1:amd64.
2026-03-09T14:24:19.164 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ...
2026-03-09T14:24:19.166 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-09T14:24:19.167 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-mako.
2026-03-09T14:24:19.171 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ...
2026-03-09T14:24:19.172 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-09T14:24:19.187 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package jq.
2026-03-09T14:24:19.194 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ...
2026-03-09T14:24:19.195 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ...
2026-03-09T14:24:19.197 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-simplegeneric.
2026-03-09T14:24:19.199 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ...
2026-03-09T14:24:19.199 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-simplegeneric (0.8.1-3) ...
2026-03-09T14:24:19.212 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-singledispatch.
2026-03-09T14:24:19.213 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package socat.
2026-03-09T14:24:19.217 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ...
2026-03-09T14:24:19.218 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ...
2026-03-09T14:24:19.220 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ...
2026-03-09T14:24:19.221 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ...
2026-03-09T14:24:19.231 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-webob.
2026-03-09T14:24:19.236 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ...
2026-03-09T14:24:19.236 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-09T14:24:19.247 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package xmlstarlet.
2026-03-09T14:24:19.253 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-waitress.
2026-03-09T14:24:19.254 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ...
2026-03-09T14:24:19.255 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking xmlstarlet (1.6.1-2.1) ...
2026-03-09T14:24:19.257 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ...
2026-03-09T14:24:19.259 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-09T14:24:19.278 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-tempita.
2026-03-09T14:24:19.283 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ...
2026-03-09T14:24:19.285 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ...
2026-03-09T14:24:19.354 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-paste.
2026-03-09T14:24:19.358 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-test.
2026-03-09T14:24:19.358 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ...
2026-03-09T14:24:19.359 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ...
2026-03-09T14:24:19.364 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:19.365 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package rbd-fuse.
2026-03-09T14:24:19.365 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:19.367 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:19.368 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:19.386 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package smartmontools.
2026-03-09T14:24:19.390 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python-pastedeploy-tpl.
2026-03-09T14:24:19.392 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ...
2026-03-09T14:24:19.395 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ...
2026-03-09T14:24:19.396 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ...
2026-03-09T14:24:19.400 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ...
2026-03-09T14:24:19.414 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-pastedeploy.
2026-03-09T14:24:19.420 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ...
2026-03-09T14:24:19.421 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-pastedeploy (2.1.1-1) ...
2026-03-09T14:24:19.441 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-webtest.
2026-03-09T14:24:19.446 INFO:teuthology.orchestra.run.vm03.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ...
2026-03-09T14:24:19.446 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ...
2026-03-09T14:24:19.447 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-webtest (2.0.35-1) ...
2026-03-09T14:24:19.465 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-pecan.
2026-03-09T14:24:19.469 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:19 vm03 bash[17524]: audit 2026-03-09T14:24:17.711977+0000 mgr.x (mgr.14150) 329 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:19.471 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ...
2026-03-09T14:24:19.473 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ...
2026-03-09T14:24:19.505 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-werkzeug.
2026-03-09T14:24:19.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:19 vm05 bash[20070]: audit 2026-03-09T14:24:17.711977+0000 mgr.x (mgr.14150) 329 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:19.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:19 vm04 bash[19581]: audit 2026-03-09T14:24:17.711977+0000 mgr.x (mgr.14150) 329 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:19.512 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ...
2026-03-09T14:24:19.513 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-09T14:24:19.545 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package ceph-mgr-modules-core.
2026-03-09T14:24:19.552 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T14:24:19.553 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:19.597 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libsqlite3-mod-ceph.
2026-03-09T14:24:19.603 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:19.604 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:19.623 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package ceph-mgr.
2026-03-09T14:24:19.628 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:19.629 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:19.670 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package ceph-mon.
2026-03-09T14:24:19.670 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:19.671 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:19.736 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service.
2026-03-09T14:24:19.736 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service.
2026-03-09T14:24:19.737 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:19 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:19.737 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:24:19 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:19.737 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:24:19 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:19.737 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:24:19 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:19.737 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:19 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:19.771 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libfuse2:amd64. 2026-03-09T14:24:19.777 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ... 2026-03-09T14:24:19.778 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T14:24:19.798 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package ceph-osd. 2026-03-09T14:24:19.806 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T14:24:19.807 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:20.031 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:19 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:24:20.031 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:24:19 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:20.031 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:24:19 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:20.031 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:24:19 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:20.031 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:19 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:20.203 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-iniconfig (1.1.1-2) ...
2026-03-09T14:24:20.253 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package ceph.
2026-03-09T14:24:20.257 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-volume.
2026-03-09T14:24:20.259 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:20.260 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:20.263 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T14:24:20.264 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:20.280 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package ceph-fuse.
2026-03-09T14:24:20.288 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:20.289 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:20.296 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libcephfs-dev.
2026-03-09T14:24:20.304 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:20 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:20.304 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:20 vm03 bash[17524]: audit 2026-03-09T14:24:18.681090+0000 mgr.x (mgr.14150) 330 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:20.304 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:20 vm03 bash[17524]: audit 2026-03-09T14:24:18.681090+0000 mgr.x (mgr.14150) 330 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:20.304 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:20 vm03 bash[17524]: cluster 2026-03-09T14:24:18.962085+0000 mgr.x (mgr.14150) 331 : cluster [DBG] pgmap v275: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:20.304 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:20 vm03 bash[17524]: cluster 2026-03-09T14:24:18.962085+0000 mgr.x (mgr.14150) 331 : cluster [DBG] pgmap v275: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:20.304 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:24:20 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:20.304 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:24:20 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:20.304 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:24:20 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:20.304 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:20 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:20.305 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:20.310 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:20.317 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-09T14:24:20.320 INFO:teuthology.orchestra.run.vm03.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ... 2026-03-09T14:24:20.325 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package ceph-mds. 2026-03-09T14:24:20.330 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package lua-socket:amd64. 2026-03-09T14:24:20.331 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T14:24:20.332 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:20.337 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ... 2026-03-09T14:24:20.338 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-09T14:24:20.377 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package lua-sec:amd64. 2026-03-09T14:24:20.383 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package cephadm. 2026-03-09T14:24:20.383 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ... 2026-03-09T14:24:20.385 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ... 2026-03-09T14:24:20.389 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T14:24:20.390 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:20.393 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service. 2026-03-09T14:24:20.406 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package nvme-cli. 2026-03-09T14:24:20.411 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-asyncssh. 2026-03-09T14:24:20.412 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ... 2026-03-09T14:24:20.413 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ... 2026-03-09T14:24:20.417 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ... 2026-03-09T14:24:20.418 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-09T14:24:20.448 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package ceph-mgr-cephadm. 2026-03-09T14:24:20.454 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T14:24:20.455 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:20.459 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package pkg-config. 2026-03-09T14:24:20.462 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ... 2026-03-09T14:24:20.463 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ... 2026-03-09T14:24:20.480 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python-asyncssh-doc. 
2026-03-09T14:24:20.482 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-repoze.lru. 2026-03-09T14:24:20.486 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ... 2026-03-09T14:24:20.487 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-09T14:24:20.489 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ... 2026-03-09T14:24:20.490 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-repoze.lru (0.7-2) ... 2026-03-09T14:24:20.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:20 vm04 bash[19581]: audit 2026-03-09T14:24:18.681090+0000 mgr.x (mgr.14150) 330 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:24:20.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:20 vm04 bash[19581]: audit 2026-03-09T14:24:18.681090+0000 mgr.x (mgr.14150) 330 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:24:20.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:20 vm04 bash[19581]: cluster 2026-03-09T14:24:18.962085+0000 mgr.x (mgr.14150) 331 : cluster [DBG] pgmap v275: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:24:20.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:20 vm04 bash[19581]: cluster 2026-03-09T14:24:18.962085+0000 mgr.x (mgr.14150) 331 : cluster [DBG] pgmap v275: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:24:20.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:20 vm05 bash[20070]: audit 2026-03-09T14:24:18.681090+0000 mgr.x (mgr.14150) 330 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:24:20.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:20 vm05 bash[20070]: audit 2026-03-09T14:24:18.681090+0000 mgr.x (mgr.14150) 330 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:24:20.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:20 vm05 bash[20070]: cluster 2026-03-09T14:24:18.962085+0000 mgr.x (mgr.14150) 331 : cluster [DBG] pgmap v275: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:24:20.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:20 vm05 bash[20070]: cluster 2026-03-09T14:24:18.962085+0000 mgr.x (mgr.14150) 331 : cluster [DBG] pgmap v275: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:24:20.514 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-routes. 2026-03-09T14:24:20.520 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ... 2026-03-09T14:24:20.521 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ... 2026-03-09T14:24:20.530 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-iniconfig. 2026-03-09T14:24:20.536 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ... 
2026-03-09T14:24:20.537 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-iniconfig (1.1.1-2) ...
2026-03-09T14:24:20.549 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package ceph-mgr-dashboard.
2026-03-09T14:24:20.554 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-pastescript.
2026-03-09T14:24:20.554 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T14:24:20.555 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:20.559 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ...
2026-03-09T14:24:20.560 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-pastescript (2.0.2-4) ...
2026-03-09T14:24:20.581 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-pluggy.
2026-03-09T14:24:20.589 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ...
2026-03-09T14:24:20.590 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-pluggy (0.13.0-7.1) ...
2026-03-09T14:24:20.609 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-psutil.
2026-03-09T14:24:20.616 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ...
2026-03-09T14:24:20.617 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-psutil (5.9.0-1build1) ...
2026-03-09T14:24:20.642 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-py.
2026-03-09T14:24:20.649 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ...
2026-03-09T14:24:20.650 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-py (1.10.0-1) ...
2026-03-09T14:24:20.652 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:20 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:20.652 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:24:20 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:20.653 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:24:20 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:20.653 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:24:20 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:20.653 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:20 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:20.653 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service.
2026-03-09T14:24:20.680 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-pygments.
2026-03-09T14:24:20.686 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ...
2026-03-09T14:24:20.687 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-09T14:24:20.748 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-pyinotify.
2026-03-09T14:24:20.754 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ...
2026-03-09T14:24:20.756 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ...
2026-03-09T14:24:20.774 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-toml.
2026-03-09T14:24:20.780 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ...
2026-03-09T14:24:20.780 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-toml (0.10.2-1) ...
2026-03-09T14:24:20.905 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-pytest.
2026-03-09T14:24:20.912 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ...
2026-03-09T14:24:20.913 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ...
2026-03-09T14:24:20.924 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-sklearn-lib:amd64.
2026-03-09T14:24:20.930 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ...
2026-03-09T14:24:20.931 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-09T14:24:20.942 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-simplejson.
2026-03-09T14:24:20.948 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ...
2026-03-09T14:24:20.949 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-simplejson (3.17.6-1build1) ...
2026-03-09T14:24:20.976 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package qttranslations5-l10n.
2026-03-09T14:24:20.979 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:20 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:20.979 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:24:20 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:20.980 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:24:20 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:20.980 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:20 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:20.980 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:24:20 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:20.983 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ...
2026-03-09T14:24:20.984 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ...
2026-03-09T14:24:20.999 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-joblib.
2026-03-09T14:24:21.004 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ...
2026-03-09T14:24:21.006 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ...
2026-03-09T14:24:21.043 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-threadpoolctl.
2026-03-09T14:24:21.049 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ...
2026-03-09T14:24:21.065 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ...
2026-03-09T14:24:21.070 INFO:teuthology.orchestra.run.vm03.stdout:nvmf-connect.target is a disabled or a static unit, not starting it.
2026-03-09T14:24:21.076 INFO:teuthology.orchestra.run.vm03.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142.
2026-03-09T14:24:21.079 INFO:teuthology.orchestra.run.vm03.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:21.080 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-sklearn. 2026-03-09T14:24:21.085 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ... 2026-03-09T14:24:21.085 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-09T14:24:21.091 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package radosgw. 2026-03-09T14:24:21.098 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T14:24:21.099 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:21.129 INFO:teuthology.orchestra.run.vm03.stdout:Adding system user cephadm....done 2026-03-09T14:24:21.138 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-09T14:24:21.215 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jaraco.classes (3.2.1-3) ... 2026-03-09T14:24:21.216 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local. 2026-03-09T14:24:21.222 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T14:24:21.223 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:21.303 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:20 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:21.303 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:24:20 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:21.303 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:20 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:21.304 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:24:20 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:24:21.304 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:24:20 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:21.321 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-09T14:24:21.329 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jaraco.functools (3.4.0-2) ... 2026-03-09T14:24:21.343 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package rbd-fuse. 2026-03-09T14:24:21.349 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T14:24:21.350 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:21.367 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package smartmontools. 2026-03-09T14:24:21.372 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ... 2026-03-09T14:24:21.379 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ... 2026-03-09T14:24:21.402 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-repoze.lru (0.7-2) ... 2026-03-09T14:24:21.468 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-cachetools. 2026-03-09T14:24:21.469 INFO:teuthology.orchestra.run.vm04.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ... 2026-03-09T14:24:21.473 INFO:teuthology.orchestra.run.vm03.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T14:24:21.474 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ... 2026-03-09T14:24:21.475 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-py (1.10.0-1) ... 2026-03-09T14:24:21.475 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-cachetools (5.0.0-1) ... 2026-03-09T14:24:21.491 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-rsa. 2026-03-09T14:24:21.497 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ... 2026-03-09T14:24:21.498 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-rsa (4.8-1) ... 2026-03-09T14:24:21.520 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-google-auth. 2026-03-09T14:24:21.526 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ... 2026-03-09T14:24:21.527 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-google-auth (1.5.1-3) ... 2026-03-09T14:24:21.547 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-requests-oauthlib. 2026-03-09T14:24:21.553 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ... 2026-03-09T14:24:21.554 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-09T14:24:21.573 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ... 
2026-03-09T14:24:21.575 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-websocket. 2026-03-09T14:24:21.581 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ... 2026-03-09T14:24:21.582 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-websocket (1.2.3-1) ... 2026-03-09T14:24:21.601 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-kubernetes. 2026-03-09T14:24:21.606 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ... 2026-03-09T14:24:21.621 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-09T14:24:21.700 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-cachetools (5.0.0-1) ... 2026-03-09T14:24:21.725 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service. 2026-03-09T14:24:21.725 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service. 2026-03-09T14:24:21.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:21 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:21.758 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:24:21 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:21.758 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:24:21 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:21.758 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:24:21 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:21.776 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package ceph-mgr-k8sevents. 2026-03-09T14:24:21.779 INFO:teuthology.orchestra.run.vm03.stdout:Setting up unzip (6.0-26ubuntu3.2) ... 2026-03-09T14:24:21.782 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T14:24:21.783 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T14:24:21.789 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pyinotify (0.9.6-1.3) ... 2026-03-09T14:24:21.799 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libonig5:amd64. 2026-03-09T14:24:21.805 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ... 2026-03-09T14:24:21.806 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T14:24:21.828 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libjq1:amd64. 2026-03-09T14:24:21.835 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-09T14:24:21.836 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T14:24:21.855 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package jq. 2026-03-09T14:24:21.862 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-09T14:24:21.863 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ... 2026-03-09T14:24:21.865 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-threadpoolctl (3.1.0-1) ... 2026-03-09T14:24:21.879 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package socat. 2026-03-09T14:24:21.887 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ... 2026-03-09T14:24:21.888 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ... 2026-03-09T14:24:21.916 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package xmlstarlet. 2026-03-09T14:24:21.922 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ... 2026-03-09T14:24:21.923 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking xmlstarlet (1.6.1-2.1) ... 2026-03-09T14:24:21.939 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:21.977 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package ceph-test. 2026-03-09T14:24:21.983 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T14:24:21.983 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:22.012 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-09T14:24:22.015 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libnbd0 (1.10.5-1) ... 2026-03-09T14:24:22.017 INFO:teuthology.orchestra.run.vm03.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-09T14:24:22.020 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ... 2026-03-09T14:24:22.023 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T14:24:22.025 INFO:teuthology.orchestra.run.vm03.stdout:Setting up lua5.1 (5.1.5-8.1build4) ... 
2026-03-09T14:24:22.029 INFO:teuthology.orchestra.run.vm03.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode 2026-03-09T14:24:22.032 INFO:teuthology.orchestra.run.vm03.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode 2026-03-09T14:24:22.034 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T14:24:22.037 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-psutil (5.9.0-1build1) ... 2026-03-09T14:24:22.131 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:21 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:22.131 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:24:21 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:22.132 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:24:21 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:22.132 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:24:21 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:22.132 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:24:22 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:22.171 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-natsort (8.0.2-1) ... 2026-03-09T14:24:22.217 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-iniconfig (1.1.1-2) ... 2026-03-09T14:24:22.251 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ... 2026-03-09T14:24:22.298 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T14:24:22.301 INFO:teuthology.orchestra.run.vm04.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ... 2026-03-09T14:24:22.329 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-simplejson (3.17.6-1build1) ... 
2026-03-09T14:24:22.374 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service.
2026-03-09T14:24:22.457 INFO:teuthology.orchestra.run.vm03.stdout:Setting up zip (3.0-12build2) ...
2026-03-09T14:24:22.492 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:22 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:22.492 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:24:22 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:22.492 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:24:22 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:22.513 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-09T14:24:22.627 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service.
2026-03-09T14:24:22.647 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package ceph-volume.
2026-03-09T14:24:22.653 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T14:24:22.654 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:22.683 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libcephfs-dev.
2026-03-09T14:24:22.689 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:22.689 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:22.709 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package lua-socket:amd64.
2026-03-09T14:24:22.718 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ...
2026-03-09T14:24:22.719 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-09T14:24:22.748 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:22 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:22.748 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:22 vm04 bash[19581]: cluster 2026-03-09T14:24:20.962392+0000 mgr.x (mgr.14150) 332 : cluster [DBG] pgmap v276: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:22.748 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:22 vm04 bash[19581]: cluster 2026-03-09T14:24:20.962392+0000 mgr.x (mgr.14150) 332 : cluster [DBG] pgmap v276: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:22.748 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:24:22 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:22.748 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:24:22 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:22.748 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:24:22 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:22.753 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package lua-sec:amd64.
2026-03-09T14:24:22.754 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ...
2026-03-09T14:24:22.755 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ...
2026-03-09T14:24:22.772 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package nvme-cli.
2026-03-09T14:24:22.779 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ...
2026-03-09T14:24:22.780 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ...
2026-03-09T14:24:22.818 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ...
2026-03-09T14:24:22.826 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package pkg-config.
2026-03-09T14:24:22.832 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ...
2026-03-09T14:24:22.834 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ...
2026-03-09T14:24:22.853 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python-asyncssh-doc.
2026-03-09T14:24:22.860 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ...
2026-03-09T14:24:22.861 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-09T14:24:22.890 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ... 2026-03-09T14:24:22.892 INFO:teuthology.orchestra.run.vm03.stdout:Setting up qttranslations5-l10n (5.15.3-1) ... 2026-03-09T14:24:22.895 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-09T14:24:22.909 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-iniconfig. 2026-03-09T14:24:22.916 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ... 2026-03-09T14:24:22.917 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-iniconfig (1.1.1-2) ... 2026-03-09T14:24:22.935 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-pastescript. 2026-03-09T14:24:22.941 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ... 2026-03-09T14:24:22.942 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-pastescript (2.0.2-4) ... 2026-03-09T14:24:22.966 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-pluggy. 2026-03-09T14:24:22.973 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ... 2026-03-09T14:24:22.974 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-pluggy (0.13.0-7.1) ... 2026-03-09T14:24:22.992 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-09T14:24:22.994 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-psutil. 2026-03-09T14:24:23.000 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ... 2026-03-09T14:24:23.001 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-psutil (5.9.0-1build1) ... 2026-03-09T14:24:23.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:22 vm05 bash[20070]: cluster 2026-03-09T14:24:20.962392+0000 mgr.x (mgr.14150) 332 : cluster [DBG] pgmap v276: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:24:23.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:22 vm05 bash[20070]: cluster 2026-03-09T14:24:20.962392+0000 mgr.x (mgr.14150) 332 : cluster [DBG] pgmap v276: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:24:23.009 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:22 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:23.009 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:22 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:24:23.009 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:24:22 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:23.009 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:24:22 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:23.009 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:24:22 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:23.009 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:24:22 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:23.009 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:24:22 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:23.009 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:24:22 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:23.028 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-py. 2026-03-09T14:24:23.029 INFO:teuthology.orchestra.run.vm04.stdout:nvmf-connect.target is a disabled or a static unit, not starting it. 2026-03-09T14:24:23.034 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ... 2026-03-09T14:24:23.035 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-py (1.10.0-1) ... 2026-03-09T14:24:23.039 INFO:teuthology.orchestra.run.vm04.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142. 2026-03-09T14:24:23.040 INFO:teuthology.orchestra.run.vm04.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T14:24:23.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:22 vm03 bash[17524]: cluster 2026-03-09T14:24:20.962392+0000 mgr.x (mgr.14150) 332 : cluster [DBG] pgmap v276: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:23.060 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-pygments.
2026-03-09T14:24:23.066 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ...
2026-03-09T14:24:23.067 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-09T14:24:23.098 INFO:teuthology.orchestra.run.vm04.stdout:Adding system user cephadm....done
2026-03-09T14:24:23.114 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-09T14:24:23.132 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-pyinotify.
2026-03-09T14:24:23.137 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ...
2026-03-09T14:24:23.138 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ...
2026-03-09T14:24:23.144 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ...
2026-03-09T14:24:23.152 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-toml.
2026-03-09T14:24:23.156 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ...
2026-03-09T14:24:23.157 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-toml (0.10.2-1) ...
2026-03-09T14:24:23.171 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-pytest.
2026-03-09T14:24:23.175 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ...
2026-03-09T14:24:23.176 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ...
2026-03-09T14:24:23.196 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-jaraco.classes (3.2.1-3) ...
2026-03-09T14:24:23.203 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-simplejson.
2026-03-09T14:24:23.209 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ...
2026-03-09T14:24:23.210 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-simplejson (3.17.6-1build1) ...
2026-03-09T14:24:23.230 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package qttranslations5-l10n.
2026-03-09T14:24:23.234 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ...
2026-03-09T14:24:23.235 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ...
2026-03-09T14:24:23.266 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-09T14:24:23.269 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-jaraco.functools (3.4.0-2) ...
2026-03-09T14:24:23.277 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-09T14:24:23.361 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-repoze.lru (0.7-2) ...
2026-03-09T14:24:23.375 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-09T14:24:23.376 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package radosgw.
2026-03-09T14:24:23.382 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:23.384 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:23.441 INFO:teuthology.orchestra.run.vm04.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-09T14:24:23.444 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-py (1.10.0-1) ...
2026-03-09T14:24:23.502 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jaraco.text (3.6.0-2) ...
2026-03-09T14:24:23.590 INFO:teuthology.orchestra.run.vm03.stdout:Setting up socat (1.7.4.1-3ubuntu4) ...
2026-03-09T14:24:23.591 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ...
2026-03-09T14:24:23.593 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:23.605 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package rbd-fuse.
2026-03-09T14:24:23.612 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T14:24:23.613 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:23.641 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package smartmontools.
2026-03-09T14:24:23.647 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ...
2026-03-09T14:24:23.655 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ...
2026-03-09T14:24:23.696 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-09T14:24:23.700 INFO:teuthology.orchestra.run.vm05.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ...
2026-03-09T14:24:23.741 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-cachetools (5.0.0-1) ...
2026-03-09T14:24:23.815 INFO:teuthology.orchestra.run.vm04.stdout:Setting up unzip (6.0-26ubuntu3.2) ...
2026-03-09T14:24:23.823 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-pyinotify (0.9.6-1.3) ...
2026-03-09T14:24:23.898 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-threadpoolctl (3.1.0-1) ...
2026-03-09T14:24:23.971 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:23.992 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service.
2026-03-09T14:24:23.993 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service.
2026-03-09T14:24:23.993 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:24:23 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:23.993 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:24:23 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:23.993 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:23 vm05 bash[20070]: cluster 2026-03-09T14:24:22.962706+0000 mgr.x (mgr.14150) 333 : cluster [DBG] pgmap v277: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:23.993 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:23 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:23.993 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:24:23 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:23.993 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:24:23 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:24.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:23 vm04 bash[19581]: cluster 2026-03-09T14:24:22.962706+0000 mgr.x (mgr.14150) 333 : cluster [DBG] pgmap v277: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:24.046 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-09T14:24:24.048 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libnbd0 (1.10.5-1) ...
2026-03-09T14:24:24.051 INFO:teuthology.orchestra.run.vm04.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-09T14:24:24.053 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ...
2026-03-09T14:24:24.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:23 vm03 bash[17524]: cluster 2026-03-09T14:24:22.962706+0000 mgr.x (mgr.14150) 333 : cluster [DBG] pgmap v277: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:24.055 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-09T14:24:24.057 INFO:teuthology.orchestra.run.vm04.stdout:Setting up lua5.1 (5.1.5-8.1build4) ...
2026-03-09T14:24:24.061 INFO:teuthology.orchestra.run.vm04.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode
2026-03-09T14:24:24.063 INFO:teuthology.orchestra.run.vm04.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode
2026-03-09T14:24:24.065 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-09T14:24:24.067 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-psutil (5.9.0-1build1) ...
2026-03-09T14:24:24.198 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-natsort (8.0.2-1) ...
2026-03-09T14:24:24.258 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:24 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:24.258 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:24:24 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:24.258 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:24:24 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:24.258 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:24:24 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:24.258 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:24:24 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:24.274 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ...
2026-03-09T14:24:24.287 INFO:teuthology.orchestra.run.vm03.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ...
2026-03-09T14:24:24.311 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T14:24:24.316 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-toml (0.10.2-1) ...
2026-03-09T14:24:24.350 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-simplejson (3.17.6-1build1) ...
2026-03-09T14:24:24.391 INFO:teuthology.orchestra.run.vm03.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-09T14:24:24.394 INFO:teuthology.orchestra.run.vm03.stdout:Setting up xmlstarlet (1.6.1-2.1) ...
2026-03-09T14:24:24.396 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pluggy (0.13.0-7.1) ...
2026-03-09T14:24:24.437 INFO:teuthology.orchestra.run.vm04.stdout:Setting up zip (3.0-12build2) ...
2026-03-09T14:24:24.440 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-09T14:24:24.468 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-zc.lockfile (2.0-1) ...
2026-03-09T14:24:24.478 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-iniconfig (1.1.1-2) ...
2026-03-09T14:24:24.537 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T14:24:24.540 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-rsa (4.8-1) ...
2026-03-09T14:24:24.548 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-09T14:24:24.550 INFO:teuthology.orchestra.run.vm05.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ...
2026-03-09T14:24:24.617 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-singledispatch (3.4.0.3-3) ...
2026-03-09T14:24:24.617 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service.
2026-03-09T14:24:24.688 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-logutils (0.3.3-8) ...
2026-03-09T14:24:24.727 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ...
2026-03-09T14:24:24.749 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:24 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:24.749 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:24:24 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:24.749 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:24:24 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:24.749 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:24:24 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:24.749 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:24:24 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:24.764 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-tempora (4.1.2-1) ...
2026-03-09T14:24:24.802 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ...
2026-03-09T14:24:24.805 INFO:teuthology.orchestra.run.vm04.stdout:Setting up qttranslations5-l10n (5.15.3-1) ...
2026-03-09T14:24:24.807 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-09T14:24:24.831 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-simplegeneric (0.8.1-3) ...
2026-03-09T14:24:24.888 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service.
2026-03-09T14:24:24.899 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-prettytable (2.5.0-2) ...
2026-03-09T14:24:24.902 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-09T14:24:24.978 INFO:teuthology.orchestra.run.vm03.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-09T14:24:24.980 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-websocket (1.2.3-1) ...
2026-03-09T14:24:25.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:24 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:25.008 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:24:24 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:25.008 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:24:24 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:25.008 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:24:24 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:25.009 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:24:24 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:25.043 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ...
2026-03-09T14:24:25.063 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-09T14:24:25.065 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-09T14:24:25.137 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-09T14:24:25.174 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-09T14:24:25.224 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-09T14:24:25.270 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-09T14:24:25.317 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jaraco.collections (3.4.0-2) ...
2026-03-09T14:24:25.363 INFO:teuthology.orchestra.run.vm05.stdout:nvmf-connect.target is a disabled or a static unit, not starting it.
2026-03-09T14:24:25.366 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:25 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:25.366 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:24:25 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:25.366 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:24:25 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:25.367 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:24:25 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:25.367 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:24:25 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:25.370 INFO:teuthology.orchestra.run.vm05.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142.
2026-03-09T14:24:25.372 INFO:teuthology.orchestra.run.vm05.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:25.386 INFO:teuthology.orchestra.run.vm03.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-09T14:24:25.389 INFO:teuthology.orchestra.run.vm03.stdout:Setting up lua-sec:amd64 (1.0.2-1) ...
2026-03-09T14:24:25.391 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-jaraco.text (3.6.0-2) ...
2026-03-09T14:24:25.392 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-09T14:24:25.394 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ...
2026-03-09T14:24:25.417 INFO:teuthology.orchestra.run.vm05.stdout:Adding system user cephadm....done
2026-03-09T14:24:25.427 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-09T14:24:25.461 INFO:teuthology.orchestra.run.vm04.stdout:Setting up socat (1.7.4.1-3ubuntu4) ...
2026-03-09T14:24:25.464 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:25.508 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-jaraco.classes (3.2.1-3) ...
2026-03-09T14:24:25.537 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pastedeploy (2.1.1-1) ...
2026-03-09T14:24:25.558 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-09T14:24:25.573 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-09T14:24:25.576 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-jaraco.functools (3.4.0-2) ...
2026-03-09T14:24:25.609 INFO:teuthology.orchestra.run.vm03.stdout:Setting up lua-any (27ubuntu1) ...
2026-03-09T14:24:25.611 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-portend (3.0.0-1) ...
2026-03-09T14:24:25.643 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-repoze.lru (0.7-2) ...
2026-03-09T14:24:25.678 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T14:24:25.681 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-google-auth (1.5.1-3) ...
2026-03-09T14:24:25.717 INFO:teuthology.orchestra.run.vm05.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-09T14:24:25.720 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-py (1.10.0-1) ...
2026-03-09T14:24:25.758 INFO:teuthology.orchestra.run.vm03.stdout:Setting up jq (1.6-2.1ubuntu3.1) ...
2026-03-09T14:24:25.760 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-webtest (2.0.35-1) ...
2026-03-09T14:24:25.815 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ...
2026-03-09T14:24:25.834 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-cherrypy3 (18.6.1-4) ...
2026-03-09T14:24:25.949 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-cachetools (5.0.0-1) ...
2026-03-09T14:24:25.971 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pastescript (2.0.2-4) ...
2026-03-09T14:24:26.018 INFO:teuthology.orchestra.run.vm05.stdout:Setting up unzip (6.0-26ubuntu3.2) ...
2026-03-09T14:24:26.031 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-pyinotify (0.9.6-1.3) ...
2026-03-09T14:24:26.063 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ...
2026-03-09T14:24:26.107 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-threadpoolctl (3.1.0-1) ...
2026-03-09T14:24:26.138 INFO:teuthology.orchestra.run.vm04.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ...
2026-03-09T14:24:26.163 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T14:24:26.168 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-toml (0.10.2-1) ...
2026-03-09T14:24:26.180 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-09T14:24:26.182 INFO:teuthology.orchestra.run.vm03.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:26.183 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:26.184 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:26.187 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-09T14:24:26.246 INFO:teuthology.orchestra.run.vm04.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-09T14:24:26.248 INFO:teuthology.orchestra.run.vm04.stdout:Setting up xmlstarlet (1.6.1-2.1) ...
2026-03-09T14:24:26.251 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-pluggy (0.13.0-7.1) ...
2026-03-09T14:24:26.252 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-09T14:24:26.254 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libnbd0 (1.10.5-1) ...
2026-03-09T14:24:26.257 INFO:teuthology.orchestra.run.vm05.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-09T14:24:26.260 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ...
2026-03-09T14:24:26.262 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-09T14:24:26.265 INFO:teuthology.orchestra.run.vm05.stdout:Setting up lua5.1 (5.1.5-8.1build4) ...
2026-03-09T14:24:26.269 INFO:teuthology.orchestra.run.vm05.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode
2026-03-09T14:24:26.271 INFO:teuthology.orchestra.run.vm05.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode
2026-03-09T14:24:26.274 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-09T14:24:26.276 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-psutil (5.9.0-1build1) ...
2026-03-09T14:24:26.303 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:26 vm03 bash[17524]: cluster 2026-03-09T14:24:24.963037+0000 mgr.x (mgr.14150) 334 : cluster [DBG] pgmap v278: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:26.324 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-zc.lockfile (2.0-1) ...
2026-03-09T14:24:26.396 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T14:24:26.399 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-rsa (4.8-1) ...
2026-03-09T14:24:26.406 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-natsort (8.0.2-1) ...
2026-03-09T14:24:26.474 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-singledispatch (3.4.0.3-3) ...
2026-03-09T14:24:26.487 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ...
2026-03-09T14:24:26.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:26 vm05 bash[20070]: cluster 2026-03-09T14:24:24.963037+0000 mgr.x (mgr.14150) 334 : cluster [DBG] pgmap v278: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:26.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:26 vm04 bash[19581]: cluster 2026-03-09T14:24:24.963037+0000 mgr.x (mgr.14150) 334 : cluster [DBG] pgmap v278: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:26.548 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-logutils (0.3.3-8) ...
2026-03-09T14:24:26.564 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-simplejson (3.17.6-1build1) ...
2026-03-09T14:24:26.624 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-tempora (4.1.2-1) ...
2026-03-09T14:24:26.658 INFO:teuthology.orchestra.run.vm05.stdout:Setting up zip (3.0-12build2) ...
2026-03-09T14:24:26.661 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-09T14:24:26.698 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-simplegeneric (0.8.1-3) ...
2026-03-09T14:24:26.765 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-prettytable (2.5.0-2) ...
2026-03-09T14:24:26.795 INFO:teuthology.orchestra.run.vm03.stdout:Setting up luarocks (3.8.0+dfsg1-1) ...
2026-03-09T14:24:26.802 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:26.804 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:26.807 INFO:teuthology.orchestra.run.vm03.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:26.809 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:26.812 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:26.840 INFO:teuthology.orchestra.run.vm04.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-09T14:24:26.843 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-websocket (1.2.3-1) ...
2026-03-09T14:24:26.876 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-09T14:24:26.876 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-09T14:24:26.923 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-09T14:24:26.925 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-09T14:24:26.944 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ...
2026-03-09T14:24:27.006 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-09T14:24:27.019 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ...
2026-03-09T14:24:27.021 INFO:teuthology.orchestra.run.vm05.stdout:Setting up qttranslations5-l10n (5.15.3-1) ...
2026-03-09T14:24:27.024 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-09T14:24:27.103 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-09T14:24:27.121 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-09T14:24:27.199 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-jaraco.collections (3.4.0-2) ...
2026-03-09T14:24:27.248 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:24:27 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:27.248 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:24:27 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:27.248 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:24:27 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:27.249 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:27 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:27.249 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:27 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:27.266 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ...
2026-03-09T14:24:27.270 INFO:teuthology.orchestra.run.vm04.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-09T14:24:27.273 INFO:teuthology.orchestra.run.vm04.stdout:Setting up lua-sec:amd64 (1.0.2-1) ...
2026-03-09T14:24:27.275 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-09T14:24:27.279 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ...
2026-03-09T14:24:27.348 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:27.351 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:27.354 INFO:teuthology.orchestra.run.vm03.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:27.356 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:27.358 INFO:teuthology.orchestra.run.vm03.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:27.361 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:27.363 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:27.365 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:27.400 INFO:teuthology.orchestra.run.vm03.stdout:Adding group ceph....done
2026-03-09T14:24:27.402 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-09T14:24:27.423 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-pastedeploy (2.1.1-1) ...
2026-03-09T14:24:27.438 INFO:teuthology.orchestra.run.vm03.stdout:Adding system user ceph....done
2026-03-09T14:24:27.447 INFO:teuthology.orchestra.run.vm03.stdout:Setting system user ceph properties....done
2026-03-09T14:24:27.452 INFO:teuthology.orchestra.run.vm03.stdout:Fixing /var/run/ceph ownership....done
2026-03-09T14:24:27.457 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:24:27 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:27.457 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:27 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:27.457 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:27 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:27.457 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:24:27 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:27.457 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:24:27 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:27.496 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-09T14:24:27.497 INFO:teuthology.orchestra.run.vm04.stdout:Setting up lua-any (27ubuntu1) ...
2026-03-09T14:24:27.500 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-portend (3.0.0-1) ...
2026-03-09T14:24:27.572 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T14:24:27.575 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-google-auth (1.5.1-3) ...
2026-03-09T14:24:27.617 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-jaraco.text (3.6.0-2) ...
2026-03-09T14:24:27.661 INFO:teuthology.orchestra.run.vm04.stdout:Setting up jq (1.6-2.1ubuntu3.1) ...
2026-03-09T14:24:27.663 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-webtest (2.0.35-1) ...
2026-03-09T14:24:27.687 INFO:teuthology.orchestra.run.vm05.stdout:Setting up socat (1.7.4.1-3ubuntu4) ...
2026-03-09T14:24:27.689 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:27.745 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-cherrypy3 (18.6.1-4) ...
2026-03-09T14:24:27.787 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service.
2026-03-09T14:24:27.788 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:27 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:27.788 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:24:27 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:27.788 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:24:27 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:27.788 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:24:27 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:27.789 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:27 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:27.789 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:27 vm03 bash[37744]: debug there is no tcmu-runner data available
2026-03-09T14:24:27.791 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-09T14:24:27.898 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-pastescript (2.0.2-4) ...
2026-03-09T14:24:27.990 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ...
2026-03-09T14:24:28.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:27 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:28.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:28 vm03 bash[17524]: cluster 2026-03-09T14:24:26.964792+0000 mgr.x (mgr.14150) 335 : cluster [DBG] pgmap v279: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:28.053 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:24:27 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:28.053 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:24:27 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:28.054 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:24:27 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:28.054 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:27 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:28.119 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-09T14:24:28.121 INFO:teuthology.orchestra.run.vm04.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:28.124 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:28.126 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-09T14:24:28.234 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:28.237 INFO:teuthology.orchestra.run.vm03.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:28.392 INFO:teuthology.orchestra.run.vm05.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ...
2026-03-09T14:24:28.413 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:28 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:28.414 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:24:28 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:28.414 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:24:28 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:28.414 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:24:28 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:28.414 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:28 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:28.415 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T14:24:28.420 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-toml (0.10.2-1) ...
2026-03-09T14:24:28.499 INFO:teuthology.orchestra.run.vm05.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-09T14:24:28.502 INFO:teuthology.orchestra.run.vm05.stdout:Setting up xmlstarlet (1.6.1-2.1) ...
2026-03-09T14:24:28.504 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-pluggy (0.13.0-7.1) ...
2026-03-09T14:24:28.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:28 vm05 bash[20070]: cluster 2026-03-09T14:24:26.964792+0000 mgr.x (mgr.14150) 335 : cluster [DBG] pgmap v279: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:28.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:28 vm05 bash[20070]: cluster 2026-03-09T14:24:26.964792+0000 mgr.x (mgr.14150) 335 : cluster [DBG] pgmap v279: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:28.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:28 vm04 bash[19581]: cluster 2026-03-09T14:24:26.964792+0000 mgr.x (mgr.14150) 335 : cluster [DBG] pgmap v279: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:28.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:28 vm04 bash[19581]: cluster 2026-03-09T14:24:26.964792+0000 mgr.x (mgr.14150) 335 : cluster [DBG] pgmap v279: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:28.542 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
2026-03-09T14:24:28.542 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
2026-03-09T14:24:28.576 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-zc.lockfile (2.0-1) ...
2026-03-09T14:24:28.643 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T14:24:28.646 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-rsa (4.8-1) ...
2026-03-09T14:24:28.682 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:28 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:28.682 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:24:28 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:28.682 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:24:28 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:28.682 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:24:28 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:28.682 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:28 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:28.720 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-singledispatch (3.4.0.3-3) ...
2026-03-09T14:24:28.724 INFO:teuthology.orchestra.run.vm04.stdout:Setting up luarocks (3.8.0+dfsg1-1) ...
2026-03-09T14:24:28.732 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:28.734 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:28.737 INFO:teuthology.orchestra.run.vm04.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:28.740 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:28.742 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:28.794 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-logutils (0.3.3-8) ...
2026-03-09T14:24:28.804 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-09T14:24:28.805 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-09T14:24:28.873 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-tempora (4.1.2-1) ...
2026-03-09T14:24:28.941 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-simplegeneric (0.8.1-3) ...
2026-03-09T14:24:28.986 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:28 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:28.986 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:28 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:28.986 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:24:28 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:28.986 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:24:28 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:28.987 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:24:28 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:28.987 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:24:28 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:28.987 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:24:28 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:28.987 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:24:28 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:28.987 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:28 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:28.987 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:28 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:28.997 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:29.008 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:24:28 vm05 bash[38699]: debug there is no tcmu-runner data available
2026-03-09T14:24:29.009 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-prettytable (2.5.0-2) ...
2026-03-09T14:24:29.089 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service.
2026-03-09T14:24:29.090 INFO:teuthology.orchestra.run.vm05.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-09T14:24:29.092 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-websocket (1.2.3-1) ...
2026-03-09T14:24:29.179 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-09T14:24:29.181 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-09T14:24:29.187 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:28 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:29.187 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:29 vm04 bash[19581]: audit 2026-03-09T14:24:27.721725+0000 mgr.x (mgr.14150) 336 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:29.187 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:29 vm04 bash[19581]: audit 2026-03-09T14:24:27.721725+0000 mgr.x (mgr.14150) 336 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:29.187 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:24:28 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:29.187 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:24:28 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:29.187 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:24:28 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:29.250 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-09T14:24:29.272 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:29.274 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:29.276 INFO:teuthology.orchestra.run.vm04.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:29.279 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:29.281 INFO:teuthology.orchestra.run.vm04.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:29.284 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:29.286 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:29.289 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:29.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:29 vm03 bash[17524]: audit 2026-03-09T14:24:27.721725+0000 mgr.x (mgr.14150) 336 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:29.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:29 vm03 bash[17524]: audit 2026-03-09T14:24:27.721725+0000 mgr.x (mgr.14150) 336 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:29.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:29 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:29.292 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:24:29 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:29.292 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:24:29 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:29.293 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:24:29 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:29.293 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:29 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:29.327 INFO:teuthology.orchestra.run.vm04.stdout:Adding group ceph....done
2026-03-09T14:24:29.342 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-09T14:24:29.366 INFO:teuthology.orchestra.run.vm04.stdout:Adding system user ceph....done
2026-03-09T14:24:29.376 INFO:teuthology.orchestra.run.vm04.stdout:Setting system user ceph properties....done
2026-03-09T14:24:29.383 INFO:teuthology.orchestra.run.vm04.stdout:Fixing /var/run/ceph ownership....done
2026-03-09T14:24:29.387 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:29 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:29.387 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:24:29 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:29.387 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:24:29 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:29.387 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:24:29 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:29.435 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-jaraco.collections (3.4.0-2) ...
2026-03-09T14:24:29.503 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:29.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:29 vm05 bash[20070]: audit 2026-03-09T14:24:27.721725+0000 mgr.x (mgr.14150) 336 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:29.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:29 vm05 bash[20070]: audit 2026-03-09T14:24:27.721725+0000 mgr.x (mgr.14150) 336 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:29.513 INFO:teuthology.orchestra.run.vm05.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-09T14:24:29.516 INFO:teuthology.orchestra.run.vm05.stdout:Setting up lua-sec:amd64 (1.0.2-1) ...
2026-03-09T14:24:29.519 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-09T14:24:29.522 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ...
2026-03-09T14:24:29.553 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:24:29 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:29.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:29 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:29.553 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:29 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:29.553 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:24:29 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:29.554 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:24:29 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:29.581 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
2026-03-09T14:24:29.581 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
2026-03-09T14:24:29.667 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-pastedeploy (2.1.1-1) ...
2026-03-09T14:24:29.738 INFO:teuthology.orchestra.run.vm05.stdout:Setting up lua-any (27ubuntu1) ...
2026-03-09T14:24:29.741 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-portend (3.0.0-1) ...
2026-03-09T14:24:29.742 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service.
2026-03-09T14:24:29.744 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:29 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:29.744 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:24:29 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:29.744 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:24:29 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:29.744 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:24:29 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:29.805 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T14:24:29.808 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-google-auth (1.5.1-3) ...
2026-03-09T14:24:29.885 INFO:teuthology.orchestra.run.vm05.stdout:Setting up jq (1.6-2.1ubuntu3.1) ...
2026-03-09T14:24:29.888 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-webtest (2.0.35-1) ...
2026-03-09T14:24:29.909 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:29 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:29.910 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:24:29 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:29.910 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:24:29 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:29.910 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:24:29 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:29.910 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:29 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:29.962 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-cherrypy3 (18.6.1-4) ...
2026-03-09T14:24:29.990 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:30.010 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:24:29 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.011 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:24:29 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.011 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:29 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.011 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:24:29 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.063 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-09T14:24:30.063 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-09T14:24:30.104 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-pastescript (2.0.2-4) ...
2026-03-09T14:24:30.151 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:30.154 INFO:teuthology.orchestra.run.vm04.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:30.182 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:29 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.182 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:30 vm03 bash[17524]: audit 2026-03-09T14:24:28.690314+0000 mgr.x (mgr.14150) 337 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:30.182 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:30 vm03 bash[17524]: audit 2026-03-09T14:24:28.690314+0000 mgr.x (mgr.14150) 337 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:30.182 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:30 vm03 bash[17524]: cluster 2026-03-09T14:24:28.965104+0000 mgr.x (mgr.14150) 338 : cluster [DBG] pgmap v280: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:30.182 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:30 vm03 bash[17524]: cluster 2026-03-09T14:24:28.965104+0000 mgr.x (mgr.14150) 338 : cluster [DBG] pgmap v280: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:30.182 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:30 vm03 bash[17524]: audit 2026-03-09T14:24:29.751886+0000 mon.a (mon.0) 736 : audit [DBG] from='client.? 192.168.123.103:0/3681600305' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-09T14:24:30.182 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:30 vm03 bash[17524]: audit 2026-03-09T14:24:29.751886+0000 mon.a (mon.0) 736 : audit [DBG] from='client.? 192.168.123.103:0/3681600305' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-09T14:24:30.182 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:24:29 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.182 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:24:29 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.183 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:24:29 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.183 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:29 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.198 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ...
2026-03-09T14:24:30.296 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:30 vm04 bash[19581]: audit 2026-03-09T14:24:28.690314+0000 mgr.x (mgr.14150) 337 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:30.296 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:30 vm04 bash[19581]: audit 2026-03-09T14:24:28.690314+0000 mgr.x (mgr.14150) 337 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:30.296 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:30 vm04 bash[19581]: cluster 2026-03-09T14:24:28.965104+0000 mgr.x (mgr.14150) 338 : cluster [DBG] pgmap v280: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:30.296 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:30 vm04 bash[19581]: cluster 2026-03-09T14:24:28.965104+0000 mgr.x (mgr.14150) 338 : cluster [DBG] pgmap v280: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:30.296 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:30 vm04 bash[19581]: audit 2026-03-09T14:24:29.751886+0000 mon.a (mon.0) 736 : audit [DBG] from='client.? 192.168.123.103:0/3681600305' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-09T14:24:30.296 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:30 vm04 bash[19581]: audit 2026-03-09T14:24:29.751886+0000 mon.a (mon.0) 736 : audit [DBG] from='client.? 192.168.123.103:0/3681600305' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-09T14:24:30.296 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:30 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.296 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:24:30 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.296 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:24:30 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.296 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:24:30 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.314 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-09T14:24:30.316 INFO:teuthology.orchestra.run.vm05.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:30.319 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:30.321 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-09T14:24:30.442 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
2026-03-09T14:24:30.442 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
2026-03-09T14:24:30.494 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:30 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.494 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:30 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.494 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:24:30 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.494 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:24:30 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.494 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:24:30 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.494 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:24:30 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.494 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:30 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.494 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:30 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.494 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:24:30 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.494 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:24:30 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.497 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:30.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:30 vm05 bash[20070]: audit 2026-03-09T14:24:28.690314+0000 mgr.x (mgr.14150) 337 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:30.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:30 vm05 bash[20070]: audit 2026-03-09T14:24:28.690314+0000 mgr.x (mgr.14150) 337 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:30.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:30 vm05 bash[20070]: cluster 2026-03-09T14:24:28.965104+0000 mgr.x (mgr.14150) 338 : cluster [DBG] pgmap v280: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:30.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:30 vm05 bash[20070]: cluster 2026-03-09T14:24:28.965104+0000 mgr.x (mgr.14150) 338 : cluster [DBG] pgmap v280: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:30.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:30 vm05 bash[20070]: audit 2026-03-09T14:24:29.751886+0000 mon.a (mon.0) 736 : audit [DBG] from='client.? 192.168.123.103:0/3681600305' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-09T14:24:30.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:30 vm05 bash[20070]: audit 2026-03-09T14:24:29.751886+0000 mon.a (mon.0) 736 : audit [DBG] from='client.? 192.168.123.103:0/3681600305' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-09T14:24:30.576 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:30 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.576 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:24:30 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.576 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:24:30 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.576 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:24:30 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
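The audit entries above capture the two iSCSI gateways (entity='client.iscsi.iscsi.a' on vm03 and 'client.iscsi.iscsi.b' on vm05) polling the mgr with {"prefix": "service status"} while an admin client issues {"prefix": "status"}. A minimal sketch of the same checks run by hand from a node holding the admin keyring, mirroring the audited command prefixes:

    ceph status
    ceph service status --format json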
2026-03-09T14:24:30.579 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-09T14:24:30.579 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-09T14:24:30.790 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:24:30 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.790 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:24:30 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.791 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:30 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.791 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:24:30 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.791 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:30 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.846 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:24:30 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.854 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:24:30 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.854 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:24:30 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.854 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:24:30 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.854 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:30 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.854 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:30 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.854 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:24:30 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:30.854 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:24:30 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:31.034 INFO:teuthology.orchestra.run.vm05.stdout:Setting up luarocks (3.8.0+dfsg1-1) ...
2026-03-09T14:24:31.035 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:31.053 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:24:30 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:31.075 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:24:30 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:31.075 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:24:30 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:31.075 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:30 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:31.075 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:30 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:31.117 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:31.126 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:31.130 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:31.133 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:31.135 INFO:teuthology.orchestra.run.vm05.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:31.138 INFO:teuthology.orchestra.run.vm05.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:31.140 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:31.140 INFO:teuthology.orchestra.run.vm05.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:31.201 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-09T14:24:31.201 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-09T14:24:31.205 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
2026-03-09T14:24:31.205 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-09T14:24:31.209 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service. 2026-03-09T14:24:31.419 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:31 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:31.419 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:24:31 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:31.419 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:24:31 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:31.419 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:24:31 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:31.419 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:31 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:31.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:31 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:31.508 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:24:31 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:31.508 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:24:31 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:31.508 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:24:31 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:31.508 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:24:31 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:31.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:31 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:31.508 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:24:31 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:31.509 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:24:31 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:31.509 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:24:31 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:31.585 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:31.598 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T14:24:31.602 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:31.616 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:31.618 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:31.621 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:31.623 INFO:teuthology.orchestra.run.vm05.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:31.625 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:31.628 INFO:teuthology.orchestra.run.vm05.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:31.631 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:31.633 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:31.635 INFO:teuthology.orchestra.run.vm05.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:31.669 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:31.673 INFO:teuthology.orchestra.run.vm05.stdout:Adding group ceph....done 2026-03-09T14:24:31.716 INFO:teuthology.orchestra.run.vm05.stdout:Adding system user ceph....done 2026-03-09T14:24:31.724 INFO:teuthology.orchestra.run.vm05.stdout:Setting system user ceph properties....done 2026-03-09T14:24:31.729 INFO:teuthology.orchestra.run.vm05.stdout:Fixing /var/run/ceph ownership....done 2026-03-09T14:24:31.733 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:31 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:31.733 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:24:31 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:31.733 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:24:31 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:31.733 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:24:31 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:31.733 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:24:31 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:31.742 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-09T14:24:31.743 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-09T14:24:31.743 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-09T14:24:31.754 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T14:24:31.770 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T14:24:31.803 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:24:31 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:31.803 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:24:31 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:31.803 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:31 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:31.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:31 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:31.803 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:24:31 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:24:31.858 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for install-info (6.8-4build1) ... 2026-03-09T14:24:31.893 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:31 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:31.894 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:24:31 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:31.894 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:24:31 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:31.894 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:24:31 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.075 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service. 2026-03-09T14:24:32.075 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:24:31 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.075 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:24:31 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.075 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:31 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:24:32.075 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:24:31 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.075 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:24:31 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.188 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:24:31 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.188 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:24:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.188 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:24:31 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.188 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:24:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.188 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:24:31 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.188 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:24:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.188 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:31 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.188 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.188 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:32 vm04 bash[19581]: cluster 2026-03-09T14:24:30.965401+0000 mgr.x (mgr.14150) 339 : cluster [DBG] pgmap v281: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:24:32.188 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:32 vm04 bash[19581]: cluster 2026-03-09T14:24:30.965401+0000 mgr.x (mgr.14150) 339 : cluster [DBG] pgmap v281: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:24:32.188 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:32 vm04 bash[19581]: audit 2026-03-09T14:24:31.892571+0000 mon.a (mon.0) 737 : audit [DBG] from='client.? 192.168.123.104:0/1992975607' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch 2026-03-09T14:24:32.188 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:32 vm04 bash[19581]: audit 2026-03-09T14:24:31.892571+0000 mon.a (mon.0) 737 : audit [DBG] from='client.? 192.168.123.104:0/1992975607' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch 2026-03-09T14:24:32.190 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:32.222 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:24:32.223 INFO:teuthology.orchestra.run.vm03.stdout:Running kernel seems to be up-to-date. 2026-03-09T14:24:32.223 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:24:32.223 INFO:teuthology.orchestra.run.vm03.stdout:Services to be restarted: 2026-03-09T14:24:32.229 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart packagekit.service 2026-03-09T14:24:32.232 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:24:32.232 INFO:teuthology.orchestra.run.vm03.stdout:Service restarts being deferred: 2026-03-09T14:24:32.232 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart networkd-dispatcher.service 2026-03-09T14:24:32.232 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart unattended-upgrades.service 2026-03-09T14:24:32.232 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:24:32.232 INFO:teuthology.orchestra.run.vm03.stdout:No containers need to be restarted. 2026-03-09T14:24:32.232 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:24:32.232 INFO:teuthology.orchestra.run.vm03.stdout:No user sessions are running outdated binaries. 
2026-03-09T14:24:32.232 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:24:32.232 INFO:teuthology.orchestra.run.vm03.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-09T14:24:32.253 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-09T14:24:32.253 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-09T14:24:32.393 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:32 vm05 bash[20070]: cluster 2026-03-09T14:24:30.965401+0000 mgr.x (mgr.14150) 339 : cluster [DBG] pgmap v281: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:32.393 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:32 vm05 bash[20070]: audit 2026-03-09T14:24:31.892571+0000 mon.a (mon.0) 737 : audit [DBG] from='client.? 192.168.123.104:0/1992975607' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
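The needrestart summary above shows one service restarted immediately on vm03 and two restarts deferred. If those deferred restarts matter for the test environment, they can be applied by hand with exactly the commands needrestart printed (a follow-up sketch; this job does not run it):

    # Apply the restarts needrestart deferred on vm03
    sudo systemctl restart networkd-dispatcher.service unattended-upgrades.service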
2026-03-09T14:24:32.393 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:24:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.484 INFO:teuthology.orchestra.run.vm05.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:32.487 INFO:teuthology.orchestra.run.vm05.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:32.500 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:24:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.500 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:24:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.501 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.501 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:24:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:32 vm03 bash[17524]: cluster 2026-03-09T14:24:30.965401+0000 mgr.x (mgr.14150) 339 : cluster [DBG] pgmap v281: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:24:32.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:32 vm03 bash[17524]: cluster 2026-03-09T14:24:30.965401+0000 mgr.x (mgr.14150) 339 : cluster [DBG] pgmap v281: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:24:32.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:32 vm03 bash[17524]: audit 2026-03-09T14:24:31.892571+0000 mon.a (mon.0) 737 : audit [DBG] from='client.? 192.168.123.104:0/1992975607' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch 2026-03-09T14:24:32.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:32 vm03 bash[17524]: audit 2026-03-09T14:24:31.892571+0000 mon.a (mon.0) 737 : audit [DBG] from='client.? 
192.168.123.104:0/1992975607' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch 2026-03-09T14:24:32.745 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:32.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.758 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:24:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.758 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:24:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.758 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:24:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.758 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:24:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:24:32.758 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:24:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.758 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:24:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.758 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:24:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.758 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:24:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.758 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:24:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.758 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:24:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.758 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:24:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:32.769 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 
2026-03-09T14:24:32.769 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-09T14:24:32.822 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-09T14:24:32.822 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-09T14:24:33.108 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.109 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:24:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.109 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:24:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.109 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:24:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.109 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:24:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.115 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:24:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.115 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:24:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.115 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:24:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.136 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T14:24:33.139 DEBUG:teuthology.orchestra.run.vm03:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install open-iscsi multipath-tools python3-xmltodict python3-jmespath 2026-03-09T14:24:33.198 INFO:teuthology.orchestra.run.vm05.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:33.205 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:33.208 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:33.214 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T14:24:33.223 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:33.291 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service. 2026-03-09T14:24:33.299 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-09T14:24:33.299 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-09T14:24:33.423 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-09T14:24:33.424 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-09T14:24:33.437 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:33 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:24:33.437 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:24:33 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.437 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:24:33 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.437 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:24:33 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.437 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:24:33 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.450 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:33 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.450 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:24:33 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.451 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:24:33 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.451 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:24:33 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.681 INFO:teuthology.orchestra.run.vm03.stdout:open-iscsi is already the newest version (2.1.5-1ubuntu1.1). 2026-03-09T14:24:33.681 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T14:24:33.682 INFO:teuthology.orchestra.run.vm03.stdout: libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T14:24:33.682 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T14:24:33.683 INFO:teuthology.orchestra.run.vm03.stdout:Suggested packages: 2026-03-09T14:24:33.683 INFO:teuthology.orchestra.run.vm03.stdout: multipath-tools-boot 2026-03-09T14:24:33.699 INFO:teuthology.orchestra.run.vm03.stdout:The following NEW packages will be installed: 2026-03-09T14:24:33.700 INFO:teuthology.orchestra.run.vm03.stdout: multipath-tools python3-jmespath python3-xmltodict 2026-03-09T14:24:33.701 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:33 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.701 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:24:33 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.701 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:24:33 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.701 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:24:33 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.701 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:24:33 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:24:33.701 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:24:33 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.701 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:24:33 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.729 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:33 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.729 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:33 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.729 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:24:33 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.729 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:24:33 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.729 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:24:33 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.729 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:24:33 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.729 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:24:33 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.729 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:24:33 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.730 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:24:33 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.730 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:24:33 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:24:33.739 INFO:teuthology.orchestra.run.vm05.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:33.782 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:33.795 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:33.798 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:33.810 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:24:33.821 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-09T14:24:33.821 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-09T14:24:33.822 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 3 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T14:24:33.822 INFO:teuthology.orchestra.run.vm03.stdout:Need to get 365 kB of archives. 2026-03-09T14:24:33.822 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 1399 kB of additional disk space will be used. 
2026-03-09T14:24:33.822 INFO:teuthology.orchestra.run.vm03.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB]
2026-03-09T14:24:33.840 INFO:teuthology.orchestra.run.vm03.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB]
2026-03-09T14:24:33.842 INFO:teuthology.orchestra.run.vm03.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 multipath-tools amd64 0.8.8-1ubuntu1.22.04.4 [331 kB]
2026-03-09T14:24:33.933 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-09T14:24:33.942 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T14:24:33.957 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T14:24:33.996 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:24:33 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:33.996 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:24:33 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:33.996 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:24:33 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:33.996 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:33 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:33.996 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:24:33 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:34.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:33 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:34.046 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for install-info (6.8-4build1) ...
2026-03-09T14:24:34.076 INFO:teuthology.orchestra.run.vm03.stdout:Fetched 365 kB in 0s (2274 kB/s)
2026-03-09T14:24:34.093 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jmespath.
2026-03-09T14:24:34.122 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118577 files and directories currently installed.)
2026-03-09T14:24:34.125 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ...
2026-03-09T14:24:34.126 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jmespath (0.10.0-1) ...
2026-03-09T14:24:34.149 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-xmltodict.
2026-03-09T14:24:34.156 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ...
2026-03-09T14:24:34.157 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-xmltodict (0.12.0-2) ...
2026-03-09T14:24:34.177 INFO:teuthology.orchestra.run.vm05.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:34.179 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package multipath-tools.
2026-03-09T14:24:34.186 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../multipath-tools_0.8.8-1ubuntu1.22.04.4_amd64.deb ...
2026-03-09T14:24:34.192 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking multipath-tools (0.8.8-1ubuntu1.22.04.4) ...
2026-03-09T14:24:34.248 INFO:teuthology.orchestra.run.vm03.stdout:Setting up multipath-tools (0.8.8-1ubuntu1.22.04.4) ...
2026-03-09T14:24:34.257 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-09T14:24:34.257 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-09T14:24:34.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:34 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:34.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:34 vm05 bash[20070]: cluster 2026-03-09T14:24:32.965735+0000 mgr.x (mgr.14150) 340 : cluster [DBG] pgmap v282: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:34.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:34 vm05 bash[20070]: audit 2026-03-09T14:24:33.996036+0000 mon.c (mon.1) 25 : audit [DBG] from='client.? 192.168.123.105:0/2316063834' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-09T14:24:34.258 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:24:34 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:34.258 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:24:34 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:34.258 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:24:34 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:34.258 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:24:34 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
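The bash[20070] lines are mon.c relaying the mgr's cluster log. Roughly the same pgmap and capacity figures can be pulled on demand with the standard ceph CLI (generic commands, not taken from this run):

    sudo ceph pg stat   # e.g. "4 pgs: 4 active+clean; 449 KiB data, ..."
    sudo ceph df        # the capacity side: bytes used vs. the 160 GiB avail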
2026-03-09T14:24:34.403 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:34 vm03 bash[17524]: cluster 2026-03-09T14:24:32.965735+0000 mgr.x (mgr.14150) 340 : cluster [DBG] pgmap v282: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:34.403 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:34 vm03 bash[17524]: audit 2026-03-09T14:24:33.996036+0000 mon.c (mon.1) 25 : audit [DBG] from='client.? 192.168.123.105:0/2316063834' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-09T14:24:34.415 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:24:34.415 INFO:teuthology.orchestra.run.vm04.stdout:Running kernel seems to be up-to-date.
2026-03-09T14:24:34.415 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:24:34.415 INFO:teuthology.orchestra.run.vm04.stdout:Services to be restarted:
2026-03-09T14:24:34.422 INFO:teuthology.orchestra.run.vm04.stdout: systemctl restart packagekit.service
2026-03-09T14:24:34.431 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:24:34.431 INFO:teuthology.orchestra.run.vm04.stdout:Service restarts being deferred:
2026-03-09T14:24:34.431 INFO:teuthology.orchestra.run.vm04.stdout: systemctl restart networkd-dispatcher.service
2026-03-09T14:24:34.431 INFO:teuthology.orchestra.run.vm04.stdout: systemctl restart unattended-upgrades.service
2026-03-09T14:24:34.431 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:24:34.431 INFO:teuthology.orchestra.run.vm04.stdout:No containers need to be restarted.
2026-03-09T14:24:34.431 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:24:34.431 INFO:teuthology.orchestra.run.vm04.stdout:No user sessions are running outdated binaries.
2026-03-09T14:24:34.431 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:24:34.431 INFO:teuthology.orchestra.run.vm04.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-09T14:24:34.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:34 vm04 bash[19581]: cluster 2026-03-09T14:24:32.965735+0000 mgr.x (mgr.14150) 340 : cluster [DBG] pgmap v282: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:34.508 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:34 vm04 bash[19581]: audit 2026-03-09T14:24:33.996036+0000 mon.c (mon.1) 25 : audit [DBG] from='client.? 192.168.123.105:0/2316063834' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-09T14:24:34.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:34 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:34.508 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:24:34 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:34.508 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:24:34 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:34.508 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:24:34 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:34.508 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:24:34 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:34.721 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:34 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:34.721 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:24:34 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:34.721 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:24:34 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:34.721 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:24:34 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:34.721 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:34 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
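The "Running kernel seems to be up-to-date" / "Services to be restarted" report interleaved above appears to be needrestart output triggered by the apt runs. The same report can be produced by hand (assuming needrestart is installed, as these Ubuntu 22.04 hosts evidently have):

    sudo needrestart -r l   # list services needing a restart instead of restarting them
    sudo needrestart -b     # machine-readable batch report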
2026-03-09T14:24:34.738 INFO:teuthology.orchestra.run.vm03.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142.
2026-03-09T14:24:34.740 INFO:teuthology.orchestra.run.vm05.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:34.743 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-xmltodict (0.12.0-2) ...
2026-03-09T14:24:34.810 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jmespath (0.10.0-1) ...
2026-03-09T14:24:34.824 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-09T14:24:34.824 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-09T14:24:34.881 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T14:24:34.939 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T14:24:34.942 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:24:34 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:34.942 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:24:34 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:34.942 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:24:34 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:34.943 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:34 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:34.943 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:24:34 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
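"Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142." is a non-fatal warning from init-system-helpers: a maintainer script asked deb-systemd-invoke to act on a unit and the underlying systemctl call failed; dpkg carries on regardless, as the subsequent "Setting up" lines show. A hypothetical way to chase it (the failing unit is not named in the log; multipathd is only a guess based on the package just configured):

    ls -l /usr/sbin/policy-rc.d   # a deny-all policy-rc.d is a common cause of this warning
    sudo deb-systemd-invoke restart multipathd.service; echo "rc=$?"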
2026-03-09T14:24:35.233 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:24:34 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:35.233 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:24:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:35.233 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:24:34 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:35.233 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:24:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:35.233 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:24:34 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:35.233 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:24:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:35.233 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:24:34 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:35.233 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:24:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:35.233 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:34 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:35.233 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:35.246 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:24:35.246 INFO:teuthology.orchestra.run.vm03.stdout:Running kernel seems to be up-to-date.
2026-03-09T14:24:35.246 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:24:35.246 INFO:teuthology.orchestra.run.vm03.stdout:Services to be restarted:
2026-03-09T14:24:35.253 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart packagekit.service
2026-03-09T14:24:35.256 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:24:35.257 INFO:teuthology.orchestra.run.vm03.stdout:Service restarts being deferred:
2026-03-09T14:24:35.257 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart networkd-dispatcher.service
2026-03-09T14:24:35.257 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart unattended-upgrades.service
2026-03-09T14:24:35.257 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:24:35.257 INFO:teuthology.orchestra.run.vm03.stdout:No containers need to be restarted.
2026-03-09T14:24:35.257 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:24:35.257 INFO:teuthology.orchestra.run.vm03.stdout:No user sessions are running outdated binaries.
2026-03-09T14:24:35.257 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:24:35.257 INFO:teuthology.orchestra.run.vm03.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-09T14:24:35.294 INFO:teuthology.orchestra.run.vm05.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:35.297 INFO:teuthology.orchestra.run.vm05.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:35.313 INFO:teuthology.orchestra.run.vm05.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:35.366 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:24:35.368 DEBUG:teuthology.orchestra.run.vm04:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install open-iscsi multipath-tools python3-xmltodict python3-jmespath
2026-03-09T14:24:35.374 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
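apt is warning that the --force-yes used by the install task's command line above is deprecated. Per apt-get(8) it decomposes into narrower --allow-* switches, so an equivalent modern invocation would be:

    sudo DEBIAN_FRONTEND=noninteractive apt-get -y \
        --allow-downgrades --allow-remove-essential --allow-change-held-packages \
        -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" \
        install open-iscsi multipath-tools python3-xmltodict python3-jmespath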
2026-03-09T14:24:35.374 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
2026-03-09T14:24:35.447 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-09T14:24:35.662 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-09T14:24:35.663 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-09T14:24:35.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:35.758 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:24:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:35.758 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:24:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:35.759 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:24:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:35.759 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:24:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:35.759 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:24:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:35.793 INFO:teuthology.orchestra.run.vm05.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:35.805 INFO:teuthology.orchestra.run.vm05.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:35.807 INFO:teuthology.orchestra.run.vm05.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:35.819 INFO:teuthology.orchestra.run.vm05.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:24:35.832 INFO:teuthology.orchestra.run.vm04.stdout:open-iscsi is already the newest version (2.1.5-1ubuntu1.1).
2026-03-09T14:24:35.832 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:24:35.833 INFO:teuthology.orchestra.run.vm04.stdout: libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T14:24:35.833 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:24:35.834 INFO:teuthology.orchestra.run.vm04.stdout:Suggested packages:
2026-03-09T14:24:35.834 INFO:teuthology.orchestra.run.vm04.stdout: multipath-tools-boot
2026-03-09T14:24:35.849 INFO:teuthology.orchestra.run.vm04.stdout:The following NEW packages will be installed:
2026-03-09T14:24:35.850 INFO:teuthology.orchestra.run.vm04.stdout: multipath-tools python3-jmespath python3-xmltodict
2026-03-09T14:24:35.938 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 3 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T14:24:35.938 INFO:teuthology.orchestra.run.vm04.stdout:Need to get 365 kB of archives.
2026-03-09T14:24:35.938 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 1399 kB of additional disk space will be used.
2026-03-09T14:24:35.938 INFO:teuthology.orchestra.run.vm04.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB]
2026-03-09T14:24:35.948 INFO:teuthology.orchestra.run.vm05.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-09T14:24:35.955 INFO:teuthology.orchestra.run.vm04.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB]
2026-03-09T14:24:35.956 INFO:teuthology.orchestra.run.vm05.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T14:24:35.957 INFO:teuthology.orchestra.run.vm04.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 multipath-tools amd64 0.8.8-1ubuntu1.22.04.4 [331 kB]
2026-03-09T14:24:35.976 INFO:teuthology.orchestra.run.vm05.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T14:24:36.065 INFO:teuthology.orchestra.run.vm05.stdout:Processing triggers for install-info (6.8-4build1) ...
2026-03-09T14:24:36.078 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:24:36.081 DEBUG:teuthology.parallel:result is None
2026-03-09T14:24:36.220 INFO:teuthology.orchestra.run.vm04.stdout:Fetched 365 kB in 0s (2757 kB/s)
2026-03-09T14:24:36.237 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-jmespath.
2026-03-09T14:24:36.274 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118577 files and directories currently installed.)
2026-03-09T14:24:36.277 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ...
2026-03-09T14:24:36.277 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-jmespath (0.10.0-1) ...
2026-03-09T14:24:36.297 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-xmltodict.
2026-03-09T14:24:36.304 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ...
2026-03-09T14:24:36.318 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-xmltodict (0.12.0-2) ...
2026-03-09T14:24:36.340 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package multipath-tools.
2026-03-09T14:24:36.346 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../multipath-tools_0.8.8-1ubuntu1.22.04.4_amd64.deb ...
2026-03-09T14:24:36.353 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking multipath-tools (0.8.8-1ubuntu1.22.04.4) ...
2026-03-09T14:24:36.410 INFO:teuthology.orchestra.run.vm04.stdout:Setting up multipath-tools (0.8.8-1ubuntu1.22.04.4) ...
2026-03-09T14:24:36.446 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T14:24:36.446 INFO:teuthology.orchestra.run.vm05.stdout:Running kernel seems to be up-to-date.
2026-03-09T14:24:36.446 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T14:24:36.446 INFO:teuthology.orchestra.run.vm05.stdout:Services to be restarted:
2026-03-09T14:24:36.453 INFO:teuthology.orchestra.run.vm05.stdout: systemctl restart packagekit.service
2026-03-09T14:24:36.456 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T14:24:36.456 INFO:teuthology.orchestra.run.vm05.stdout:Service restarts being deferred:
2026-03-09T14:24:36.456 INFO:teuthology.orchestra.run.vm05.stdout: systemctl restart networkd-dispatcher.service
2026-03-09T14:24:36.456 INFO:teuthology.orchestra.run.vm05.stdout: systemctl restart unattended-upgrades.service
2026-03-09T14:24:36.456 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T14:24:36.456 INFO:teuthology.orchestra.run.vm05.stdout:No containers need to be restarted.
2026-03-09T14:24:36.456 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T14:24:36.456 INFO:teuthology.orchestra.run.vm05.stdout:No user sessions are running outdated binaries.
2026-03-09T14:24:36.456 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T14:24:36.456 INFO:teuthology.orchestra.run.vm05.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-09T14:24:36.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:36 vm03 bash[17524]: cluster 2026-03-09T14:24:34.966026+0000 mgr.x (mgr.14150) 341 : cluster [DBG] pgmap v283: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:36.669 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:36 vm04 bash[19581]: cluster 2026-03-09T14:24:34.966026+0000 mgr.x (mgr.14150) 341 : cluster [DBG] pgmap v283: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:36.669 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:36 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:36.669 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:24:36 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:36.669 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:24:36 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:36.670 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:24:36 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:36.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:36 vm05 bash[20070]: cluster 2026-03-09T14:24:34.966026+0000 mgr.x (mgr.14150) 341 : cluster [DBG] pgmap v283: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:36.768 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:36 vm05 bash[20070]: cluster 2026-03-09T14:24:34.966026+0000 mgr.x (mgr.14150) 341 : cluster [DBG] pgmap v283: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:36.933 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:36 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:36.933 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:24:36 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:36.933 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:24:36 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:36.933 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:24:36 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:36.948 INFO:teuthology.orchestra.run.vm04.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142.
2026-03-09T14:24:36.952 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-xmltodict (0.12.0-2) ...
2026-03-09T14:24:37.018 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-jmespath (0.10.0-1) ...
2026-03-09T14:24:37.085 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T14:24:37.147 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T14:24:37.451 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:24:37.451 INFO:teuthology.orchestra.run.vm04.stdout:Running kernel seems to be up-to-date.
2026-03-09T14:24:37.451 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:24:37.451 INFO:teuthology.orchestra.run.vm04.stdout:Services to be restarted:
2026-03-09T14:24:37.458 INFO:teuthology.orchestra.run.vm04.stdout: systemctl restart packagekit.service
2026-03-09T14:24:37.459 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:24:37.461 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:24:37.461 INFO:teuthology.orchestra.run.vm04.stdout:Service restarts being deferred:
2026-03-09T14:24:37.461 INFO:teuthology.orchestra.run.vm04.stdout: systemctl restart networkd-dispatcher.service
2026-03-09T14:24:37.461 INFO:teuthology.orchestra.run.vm04.stdout: systemctl restart unattended-upgrades.service
2026-03-09T14:24:37.461 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:24:37.461 INFO:teuthology.orchestra.run.vm04.stdout:No containers need to be restarted.
2026-03-09T14:24:37.461 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:24:37.461 INFO:teuthology.orchestra.run.vm04.stdout:No user sessions are running outdated binaries.
2026-03-09T14:24:37.461 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:24:37.461 INFO:teuthology.orchestra.run.vm04.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-09T14:24:37.463 DEBUG:teuthology.orchestra.run.vm05:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install open-iscsi multipath-tools python3-xmltodict python3-jmespath
2026-03-09T14:24:37.540 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-09T14:24:37.769 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-09T14:24:37.770 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-09T14:24:37.926 INFO:teuthology.orchestra.run.vm05.stdout:open-iscsi is already the newest version (2.1.5-1ubuntu1.1).
2026-03-09T14:24:37.926 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:24:37.926 INFO:teuthology.orchestra.run.vm05.stdout: libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T14:24:37.926 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:24:37.927 INFO:teuthology.orchestra.run.vm05.stdout:Suggested packages:
2026-03-09T14:24:37.927 INFO:teuthology.orchestra.run.vm05.stdout: multipath-tools-boot
2026-03-09T14:24:37.936 INFO:teuthology.orchestra.run.vm05.stdout:The following NEW packages will be installed:
2026-03-09T14:24:37.936 INFO:teuthology.orchestra.run.vm05.stdout: multipath-tools python3-jmespath python3-xmltodict
2026-03-09T14:24:38.053 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:37 vm03 bash[37744]: debug there is no tcmu-runner data available
2026-03-09T14:24:38.265 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
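"debug there is no tcmu-runner data available" comes from the ceph-iscsi gateway container (bash[37744] on vm03) and looks expected at this point: the gwcli_create.t workload has not run yet, so no LUNs are exported and tcmu-runner has nothing to report. A generic way to confirm the gateway daemons themselves are up (standard orchestrator CLI, not taken from this log):

    sudo ceph orch ps --daemon_type iscsi
    sudo ceph orch ls iscsi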
2026-03-09T14:24:38.268 DEBUG:teuthology.parallel:result is None
2026-03-09T14:24:38.399 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 3 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T14:24:38.399 INFO:teuthology.orchestra.run.vm05.stdout:Need to get 365 kB of archives.
2026-03-09T14:24:38.399 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 1399 kB of additional disk space will be used.
2026-03-09T14:24:38.399 INFO:teuthology.orchestra.run.vm05.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB]
2026-03-09T14:24:38.622 INFO:teuthology.orchestra.run.vm05.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB]
2026-03-09T14:24:38.646 INFO:teuthology.orchestra.run.vm05.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 multipath-tools amd64 0.8.8-1ubuntu1.22.04.4 [331 kB]
2026-03-09T14:24:38.694 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:38 vm05 bash[20070]: cluster 2026-03-09T14:24:36.966300+0000 mgr.x (mgr.14150) 342 : cluster [DBG] pgmap v284: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:38.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:38 vm04 bash[19581]: cluster 2026-03-09T14:24:36.966300+0000 mgr.x (mgr.14150) 342 : cluster [DBG] pgmap v284: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:38.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:38 vm03 bash[17524]: cluster 2026-03-09T14:24:36.966300+0000 mgr.x (mgr.14150) 342 : cluster [DBG] pgmap v284: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:39.008 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:24:38 vm05 bash[38699]: debug there is no tcmu-runner data available
2026-03-09T14:24:39.149 INFO:teuthology.orchestra.run.vm05.stdout:Fetched 365 kB in 1s (353 kB/s)
2026-03-09T14:24:39.165 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-jmespath.
2026-03-09T14:24:39.186 INFO:teuthology.orchestra.run.vm05.stdout:(Reading database ... 118577 files and directories currently installed.)
2026-03-09T14:24:39.187 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ...
2026-03-09T14:24:39.239 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-jmespath (0.10.0-1) ...
2026-03-09T14:24:39.254 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-xmltodict.
2026-03-09T14:24:39.258 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ...
2026-03-09T14:24:39.259 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-xmltodict (0.12.0-2) ...
2026-03-09T14:24:39.277 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package multipath-tools.
2026-03-09T14:24:39.282 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../multipath-tools_0.8.8-1ubuntu1.22.04.4_amd64.deb ...
2026-03-09T14:24:39.286 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking multipath-tools (0.8.8-1ubuntu1.22.04.4) ...
2026-03-09T14:24:39.328 INFO:teuthology.orchestra.run.vm05.stdout:Setting up multipath-tools (0.8.8-1ubuntu1.22.04.4) ...
2026-03-09T14:24:39.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:39 vm04 bash[19581]: audit 2026-03-09T14:24:37.732522+0000 mgr.x (mgr.14150) 343 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:39.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:39 vm05 bash[20070]: audit 2026-03-09T14:24:37.732522+0000 mgr.x (mgr.14150) 343 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:39.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:39 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:39.758 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:24:39 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:39.758 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:24:39 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:39.758 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:24:39 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:39.759 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:24:39 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:24:39.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:39 vm03 bash[17524]: audit 2026-03-09T14:24:37.732522+0000 mgr.x (mgr.14150) 343 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:39.876 INFO:teuthology.orchestra.run.vm05.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142.
2026-03-09T14:24:39.881 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-xmltodict (0.12.0-2) ...
2026-03-09T14:24:39.948 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-jmespath (0.10.0-1) ...
2026-03-09T14:24:40.106 INFO:teuthology.orchestra.run.vm05.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T14:24:40.227 INFO:teuthology.orchestra.run.vm05.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T14:24:40.545 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T14:24:40.545 INFO:teuthology.orchestra.run.vm05.stdout:Running kernel seems to be up-to-date.
2026-03-09T14:24:40.545 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T14:24:40.545 INFO:teuthology.orchestra.run.vm05.stdout:Services to be restarted:
2026-03-09T14:24:40.552 INFO:teuthology.orchestra.run.vm05.stdout: systemctl restart packagekit.service
2026-03-09T14:24:40.556 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T14:24:40.556 INFO:teuthology.orchestra.run.vm05.stdout:Service restarts being deferred:
2026-03-09T14:24:40.556 INFO:teuthology.orchestra.run.vm05.stdout: systemctl restart networkd-dispatcher.service
2026-03-09T14:24:40.556 INFO:teuthology.orchestra.run.vm05.stdout: systemctl restart unattended-upgrades.service
2026-03-09T14:24:40.556 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T14:24:40.556 INFO:teuthology.orchestra.run.vm05.stdout:No containers need to be restarted.
2026-03-09T14:24:40.556 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T14:24:40.556 INFO:teuthology.orchestra.run.vm05.stdout:No user sessions are running outdated binaries.
2026-03-09T14:24:40.556 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T14:24:40.556 INFO:teuthology.orchestra.run.vm05.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
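The apt run above installs the extra system packages this job requested on vm05 before the iSCSI client work starts. A minimal sketch of the same step in Python, assuming a deb host and a plain apt-get front end; this is illustrative, not teuthology's install task, and the noninteractive frontend below is an assumption (the log only shows apt's --force-yes deprecation warning):

# Sketch only: install the job's extra deb packages without prompts.
# Package names are taken from the log lines above.
import subprocess

def install_deb_packages(packages):
    # A headless test node cannot answer dpkg questions, so force a
    # noninteractive frontend (assumed here; not visible in this log).
    subprocess.run(
        ["sudo", "env", "DEBIAN_FRONTEND=noninteractive",
         "apt-get", "install", "-y", *packages],
        check=True,
    )

install_deb_packages(["python3-jmespath", "python3-xmltodict", "multipath-tools"])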
2026-03-09T14:24:40.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:40 vm04 bash[19581]: audit 2026-03-09T14:24:38.694473+0000 mgr.x (mgr.14150) 344 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:40.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:40 vm04 bash[19581]: cluster 2026-03-09T14:24:38.966548+0000 mgr.x (mgr.14150) 345 : cluster [DBG] pgmap v285: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:40.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:40 vm05 bash[20070]: audit 2026-03-09T14:24:38.694473+0000 mgr.x (mgr.14150) 344 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:40.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:40 vm05 bash[20070]: cluster 2026-03-09T14:24:38.966548+0000 mgr.x (mgr.14150) 345 : cluster [DBG] pgmap v285: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:40.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:40 vm03 bash[17524]: audit 2026-03-09T14:24:38.694473+0000 mgr.x (mgr.14150) 344 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:40.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:40 vm03 bash[17524]: cluster 2026-03-09T14:24:38.966548+0000 mgr.x (mgr.14150) 345 : cluster [DBG] pgmap v285: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:41.511 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:24:41.514 DEBUG:teuthology.parallel:result is None
2026-03-09T14:24:41.515 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T14:24:42.152 DEBUG:teuthology.orchestra.run.vm03:> dpkg-query -W -f '${Version}' ceph
2026-03-09T14:24:42.161 INFO:teuthology.orchestra.run.vm03.stdout:19.2.3-678-ge911bdeb-1jammy
2026-03-09T14:24:42.161 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy
2026-03-09T14:24:42.161 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed.
2026-03-09T14:24:42.162 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T14:24:42.748 DEBUG:teuthology.orchestra.run.vm04:> dpkg-query -W -f '${Version}' ceph
2026-03-09T14:24:42.757 INFO:teuthology.orchestra.run.vm04.stdout:19.2.3-678-ge911bdeb-1jammy
2026-03-09T14:24:42.757 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy
2026-03-09T14:24:42.758 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed.
2026-03-09T14:24:42.759 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T14:24:42.760 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:42 vm05 bash[20070]: cluster 2026-03-09T14:24:40.966814+0000 mgr.x (mgr.14150) 346 : cluster [DBG] pgmap v286: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:42.760 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:42 vm04 bash[19581]: cluster 2026-03-09T14:24:40.966814+0000 mgr.x (mgr.14150) 346 : cluster [DBG] pgmap v286: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:42.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:42 vm03 bash[17524]: cluster 2026-03-09T14:24:40.966814+0000 mgr.x (mgr.14150) 346 : cluster [DBG] pgmap v286: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:43.335 DEBUG:teuthology.orchestra.run.vm05:> dpkg-query -W -f '${Version}' ceph
2026-03-09T14:24:43.344 INFO:teuthology.orchestra.run.vm05.stdout:19.2.3-678-ge911bdeb-1jammy
2026-03-09T14:24:43.344 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy
2026-03-09T14:24:43.344 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed.
2026-03-09T14:24:43.344 INFO:teuthology.task.install.util:Shipping valgrind.supp...
2026-03-09T14:24:43.344 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-09T14:24:43.345 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-09T14:24:43.352 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-09T14:24:43.352 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-09T14:24:43.359 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-09T14:24:43.359 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-09T14:24:43.392 INFO:teuthology.task.install.util:Shipping 'daemon-helper'...
2026-03-09T14:24:43.392 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-09T14:24:43.392 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/daemon-helper
2026-03-09T14:24:43.398 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-09T14:24:43.446 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-09T14:24:43.446 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/usr/bin/daemon-helper
2026-03-09T14:24:43.453 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-09T14:24:43.499 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-09T14:24:43.499 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/usr/bin/daemon-helper
2026-03-09T14:24:43.507 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-09T14:24:43.557 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'...
2026-03-09T14:24:43.557 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-09T14:24:43.557 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-09T14:24:43.563 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-09T14:24:43.610 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-09T14:24:43.610 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-09T14:24:43.617 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-09T14:24:43.664 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-09T14:24:43.664 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-09T14:24:43.672 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-09T14:24:43.721 INFO:teuthology.task.install.util:Shipping 'stdin-killer'...
2026-03-09T14:24:43.722 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-09T14:24:43.722 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/stdin-killer
2026-03-09T14:24:43.729 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-09T14:24:43.780 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-09T14:24:43.780 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/usr/bin/stdin-killer
2026-03-09T14:24:43.787 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-09T14:24:43.836 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-09T14:24:43.836 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/usr/bin/stdin-killer
2026-03-09T14:24:43.844 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-09T14:24:43.893 INFO:teuthology.run_tasks:Running task ceph_iscsi_client...
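Before shipping the helper scripts, the install task verifies each node: it asks shaman for a ready build of this sha1 and compares that against what dpkg reports. A minimal sketch of that check, assuming the requests library is available and that shaman answers with a JSON list of matching builds; the shaman query and the dpkg-query invocation are copied from the log, while the comparison below is deliberately simplified (the real task derives the expected version from the build record):

# Sketch only: verify the installed ceph package matches the shaman build.
import subprocess
import requests

SHAMAN = "https://shaman.ceph.com/api/search"
params = {
    "status": "ready",
    "project": "ceph",
    "flavor": "default",
    "distros": "ubuntu/22.04/x86_64",
    "sha1": "e911bdebe5c8faa3800735d1568fcdca65db60df",
}
builds = requests.get(SHAMAN, params=params).json()  # assumed: a list of builds
assert builds, "no ready build for this sha1 on shaman"

installed = subprocess.check_output(
    ["dpkg-query", "-W", "-f", "${Version}", "ceph"], text=True
)
# e.g. '19.2.3-678-ge911bdeb-1jammy': the short sha1 is embedded in the version,
# so a simple containment check is enough for this sketch.
assert params["sha1"][:8] in installed, installed
print(f"The correct ceph version {installed} is installed.")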
2026-03-09T14:24:43.895 INFO:tasks.ceph_iscsi_client:Setting up ceph-iscsi client...
2026-03-09T14:24:43.895 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-09T14:24:43.895 DEBUG:teuthology.orchestra.run.vm04:> sudo mkdir -p /etc/iscsi
2026-03-09T14:24:43.895 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/iscsi/initiatorname.iscsi
2026-03-09T14:24:43.906 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl restart iscsid
2026-03-09T14:24:43.995 DEBUG:teuthology.orchestra.run.vm04:> sudo modprobe dm_multipath
2026-03-09T14:24:44.001 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-09T14:24:44.001 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/multipath.conf
2026-03-09T14:24:44.048 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl start multipathd
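The ceph_iscsi_client task only needs a handful of host-side commands on the client (vm04 here). A sketch of the same sequence; the two files written through dd are never echoed into this log, so their contents below are hypothetical placeholders:

# Sketch only: iSCSI initiator + multipath setup as run on the client node.
import subprocess

def sh(cmd):
    subprocess.run(cmd, shell=True, check=True)

# Give the initiator a stable name and restart iscsid so it is picked up.
# The IQN below is a hypothetical placeholder; the real one is piped into
# dd by the task and does not appear in this log.
sh("sudo mkdir -p /etc/iscsi")
sh("echo InitiatorName=iqn.1993-08.org.debian:01:client1 | sudo dd of=/etc/iscsi/initiatorname.iscsi")
sh("sudo systemctl restart iscsid")

# Load device-mapper multipath and start multipathd, so the LUN exported by
# both gateways (iscsi.a and iscsi.b) shows up as one multipathed device.
sh("sudo modprobe dm_multipath")
sh("echo '# multipath.conf contents not shown in the log' | sudo dd of=/etc/multipath.conf")
sh("sudo systemctl start multipathd")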
2026-03-09T14:24:44.097 INFO:teuthology.run_tasks:Running task cram...
2026-03-09T14:24:44.101 INFO:tasks.cram:Pulling tests from https://github.com/kshtsk/ceph.git ref 569c3e99c9b32a51b4eaf08731c728f4513ed589
2026-03-09T14:24:44.101 DEBUG:teuthology.orchestra.run.vm03:> mkdir -- /home/ubuntu/cephtest/archive/cram.client.0 && python3 -m venv /home/ubuntu/cephtest/virtualenv && /home/ubuntu/cephtest/virtualenv/bin/pip install cram==0.6
2026-03-09T14:24:44.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:44 vm04 bash[19581]: cluster 2026-03-09T14:24:42.967037+0000 mgr.x (mgr.14150) 347 : cluster [DBG] pgmap v287: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:44.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:44 vm05 bash[20070]: cluster 2026-03-09T14:24:42.967037+0000 mgr.x (mgr.14150) 347 : cluster [DBG] pgmap v287: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:44.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:44 vm03 bash[17524]: cluster 2026-03-09T14:24:42.967037+0000 mgr.x (mgr.14150) 347 : cluster [DBG] pgmap v287: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:46.057 INFO:teuthology.orchestra.run.vm03.stdout:Collecting cram==0.6
2026-03-09T14:24:46.099 INFO:teuthology.orchestra.run.vm03.stdout: Downloading cram-0.6-py2.py3-none-any.whl (17 kB)
2026-03-09T14:24:46.118 INFO:teuthology.orchestra.run.vm03.stdout:Installing collected packages: cram
2026-03-09T14:24:46.125 INFO:teuthology.orchestra.run.vm03.stdout:Successfully installed cram-0.6
2026-03-09T14:24:46.166 DEBUG:teuthology.orchestra.run.vm03:> rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 569c3e99c9b32a51b4eaf08731c728f4513ed589
2026-03-09T14:24:46.170 INFO:teuthology.orchestra.run.vm03.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.0'...
2026-03-09T14:24:46.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:46 vm04 bash[19581]: cluster 2026-03-09T14:24:44.967291+0000 mgr.x (mgr.14150) 348 : cluster [DBG] pgmap v288: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:46.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:46 vm05 bash[20070]: cluster 2026-03-09T14:24:44.967291+0000 mgr.x (mgr.14150) 348 : cluster [DBG] pgmap v288: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:46.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:46 vm03 bash[17524]: cluster 2026-03-09T14:24:44.967291+0000 mgr.x (mgr.14150) 348 : cluster [DBG] pgmap v288: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:48.053 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:47 vm03 bash[37744]: debug there is no tcmu-runner data available
2026-03-09T14:24:48.705 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:48 vm05 bash[20070]: cluster 2026-03-09T14:24:46.967584+0000 mgr.x (mgr.14150) 349 : cluster [DBG] pgmap v289: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:48.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:48 vm04 bash[19581]: cluster 2026-03-09T14:24:46.967584+0000 mgr.x (mgr.14150) 349 : cluster [DBG] pgmap v289: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:48.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:48 vm03 bash[17524]: cluster 2026-03-09T14:24:46.967584+0000 mgr.x (mgr.14150) 349 : cluster [DBG] pgmap v289: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
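The cram task keeps everything per-client: its own archive directory, a pinned cram==0.6 in a throwaway virtualenv, and a fresh clone of the suite repo at the suite sha1, which keeps the .t tests reproducible and isolates clients from one another. A sketch of the same setup, with the paths, URL, and sha1 taken from the log:

# Sketch only: per-client cram environment, mirroring the two shell commands above.
import subprocess

BASE = "/home/ubuntu/cephtest"
REPO = "https://github.com/kshtsk/ceph.git"
SHA1 = "569c3e99c9b32a51b4eaf08731c728f4513ed589"

# Virtualenv with the pinned test runner.
subprocess.run(
    f"mkdir -- {BASE}/archive/cram.client.0 && "
    f"python3 -m venv {BASE}/virtualenv && "
    f"{BASE}/virtualenv/bin/pip install cram==0.6",
    shell=True, check=True,
)

# Fresh checkout of the suite repo at the exact sha1 under test.
subprocess.run(
    f"rm -rf {BASE}/clone.client.0 && "
    f"git clone {REPO} {BASE}/clone.client.0 && "
    f"cd {BASE}/clone.client.0 && git checkout {SHA1}",
    shell=True, check=True,
)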
2026-03-09T14:24:49.008 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:24:48 vm05 bash[38699]: debug there is no tcmu-runner data available
2026-03-09T14:24:49.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:49 vm04 bash[19581]: audit 2026-03-09T14:24:47.743191+0000 mgr.x (mgr.14150) 350 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:49.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:49 vm05 bash[20070]: audit 2026-03-09T14:24:47.743191+0000 mgr.x (mgr.14150) 350 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:49.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:49 vm03 bash[17524]: audit 2026-03-09T14:24:47.743191+0000 mgr.x (mgr.14150) 350 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:50.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:50 vm04 bash[19581]: audit 2026-03-09T14:24:48.705198+0000 mgr.x (mgr.14150) 351 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:50.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:50 vm04 bash[19581]: cluster 2026-03-09T14:24:48.967865+0000 mgr.x (mgr.14150) 352 : cluster [DBG] pgmap v290: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:50.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:50 vm05 bash[20070]: audit 2026-03-09T14:24:48.705198+0000 mgr.x (mgr.14150) 351 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:50.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:50 vm05 bash[20070]: cluster 2026-03-09T14:24:48.967865+0000 mgr.x (mgr.14150) 352 : cluster [DBG] pgmap v290: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:50.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:50 vm03 bash[17524]: audit 2026-03-09T14:24:48.705198+0000 mgr.x (mgr.14150) 351 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:24:50.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:50 vm03 bash[17524]: cluster 2026-03-09T14:24:48.967865+0000 mgr.x (mgr.14150) 352 : cluster [DBG] pgmap v290: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:52.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:52 vm04 bash[19581]: cluster 2026-03-09T14:24:50.968124+0000 mgr.x (mgr.14150) 353 : cluster [DBG] pgmap v291: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:52.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:52 vm05 bash[20070]: cluster 2026-03-09T14:24:50.968124+0000 mgr.x (mgr.14150) 353 : cluster [DBG] pgmap v291: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:52.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:52 vm03 bash[17524]: cluster 2026-03-09T14:24:50.968124+0000 mgr.x (mgr.14150) 353 : cluster [DBG] pgmap v291: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:54.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:54 vm04 bash[19581]: cluster 2026-03-09T14:24:52.968356+0000 mgr.x (mgr.14150) 354 : cluster [DBG] pgmap v292: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:54.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:54 vm04 bash[19581]: audit 2026-03-09T14:24:53.852033+0000 mon.a (mon.0) 738 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:24:54.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:54 vm04 bash[19581]: audit 2026-03-09T14:24:54.187987+0000 mon.a (mon.0) 739 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:24:54.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:54 vm04 bash[19581]: audit 2026-03-09T14:24:54.188588+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:24:54.758 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:54 vm04 bash[19581]: audit 2026-03-09T14:24:54.193065+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:24:54.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:54 vm05 bash[20070]: cluster 2026-03-09T14:24:52.968356+0000 mgr.x (mgr.14150) 354 : cluster [DBG] pgmap v292: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:54.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:54 vm05 bash[20070]: audit 2026-03-09T14:24:53.852033+0000 mon.a (mon.0) 738 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:24:54.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:54 vm05 bash[20070]: audit 2026-03-09T14:24:54.187987+0000 mon.a (mon.0) 739 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:24:54.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:54 vm05 bash[20070]: audit 2026-03-09T14:24:54.188588+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:24:54.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:54 vm05 bash[20070]: audit 2026-03-09T14:24:54.193065+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:24:54.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:54 vm03 bash[17524]: cluster 2026-03-09T14:24:52.968356+0000 mgr.x (mgr.14150) 354 : cluster [DBG] pgmap v292: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:54.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:54 vm03 bash[17524]: audit 2026-03-09T14:24:53.852033+0000 mon.a (mon.0) 738 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:24:54.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:54 vm03 bash[17524]: audit 2026-03-09T14:24:54.187987+0000 mon.a (mon.0) 739 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:24:54.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:54 vm03 bash[17524]: audit 2026-03-09T14:24:54.188588+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:24:54.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:54 vm03 bash[17524]: audit 2026-03-09T14:24:54.193065+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:24:56.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:56 vm03 bash[17524]: cluster 2026-03-09T14:24:54.968589+0000 mgr.x (mgr.14150) 355 : cluster [DBG] pgmap v293: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:57.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:56 vm04 bash[19581]: cluster 2026-03-09T14:24:54.968589+0000 mgr.x (mgr.14150) 355 : cluster [DBG] pgmap v293: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:57.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:56 vm05 bash[20070]: cluster 2026-03-09T14:24:54.968589+0000 mgr.x (mgr.14150) 355 : cluster [DBG] pgmap v293: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:24:58.053 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:24:57 vm03 bash[37744]: debug there is no tcmu-runner data available
2026-03-09T14:24:58.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:58 vm03 bash[17524]: cluster 2026-03-09T14:24:56.968862+0000 mgr.x (mgr.14150) 356 : cluster [DBG] pgmap v294: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
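The mon.a audit records above (738-741) look like cephadm's periodic config reconciliation: the mgr dumps the current config, regenerates a minimal ceph.conf, and fetches the client.admin keyring so both can be redistributed to managed hosts. That reading is an inference from the audit trail, not something the log states. The same two artifacts can be produced by hand with the ceph CLI:

# Sketch only: reproduce the commands seen in audit records 739 and 740.
import subprocess

def ceph(*args):
    # Thin wrapper over the ceph CLI; needs a reachable cluster and admin key.
    return subprocess.check_output(["ceph", *args], text=True)

# A minimal ceph.conf: just enough (fsid, mon addresses) for a client to reach the mons.
minimal_conf = ceph("config", "generate-minimal-conf")
# The admin keyring, as fetched in audit record 740.
admin_keyring = ceph("auth", "get", "client.admin")
print(minimal_conf)
print(admin_keyring)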
2026-03-09T14:24:59.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:58 vm04 bash[19581]: cluster 2026-03-09T14:24:56.968862+0000 mgr.x (mgr.14150) 356 : cluster [DBG] pgmap v294: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:59.008 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:24:58 vm05 bash[38699]: debug there is no tcmu-runner data available
2026-03-09T14:24:59.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:58 vm05 bash[20070]: cluster 2026-03-09T14:24:56.968862+0000 mgr.x (mgr.14150) 356 : cluster [DBG] pgmap v294: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:24:59.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:24:59 vm03 bash[17524]: audit 2026-03-09T14:24:57.749521+0000 mgr.x (mgr.14150) 357 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:25:00.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:24:59 vm04 bash[19581]: audit 2026-03-09T14:24:57.749521+0000 mgr.x (mgr.14150) 357 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:25:00.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:24:59 vm05 bash[20070]: audit 2026-03-09T14:24:57.749521+0000 mgr.x (mgr.14150) 357 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:25:01.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:00 vm04 bash[19581]: audit 2026-03-09T14:24:58.715801+0000 mgr.x (mgr.14150) 358 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:25:01.008 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:00 vm04 bash[19581]: cluster 2026-03-09T14:24:58.969098+0000 mgr.x (mgr.14150) 359 : cluster [DBG] pgmap v295: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:25:01.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:00 vm05 bash[20070]: audit 2026-03-09T14:24:58.715801+0000 mgr.x (mgr.14150) 358 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:25:01.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:00 vm05 bash[20070]: cluster 2026-03-09T14:24:58.969098+0000 mgr.x (mgr.14150) 359 : cluster [DBG] pgmap v295: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:25:01.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:00 vm03 bash[17524]: audit 2026-03-09T14:24:58.715801+0000 mgr.x (mgr.14150) 358 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:25:01.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:00 vm03 bash[17524]: cluster 2026-03-09T14:24:58.969098+0000 mgr.x (mgr.14150) 359 : cluster [DBG] pgmap v295: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:25:02.258 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:02 vm05 bash[20070]: cluster 2026-03-09T14:25:00.969472+0000 mgr.x (mgr.14150) 360 : cluster [DBG] pgmap v296: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:25:02.303 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:01 vm03 bash[17524]: cluster 2026-03-09T14:25:00.969472+0000 mgr.x (mgr.14150) 360 : cluster [DBG] pgmap v296: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:25:02.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:02 vm04 bash[19581]: cluster 2026-03-09T14:25:00.969472+0000 mgr.x (mgr.14150) 360 : cluster [DBG] pgmap v296: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:25:04.257 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:03 vm04 bash[19581]: cluster 2026-03-09T14:25:02.969725+0000 mgr.x (mgr.14150) 361 : cluster [DBG] pgmap v297: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:25:04.258 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:03 vm05 bash[20070]: cluster 2026-03-09T14:25:02.969725+0000 mgr.x (mgr.14150) 361 : cluster [DBG] pgmap v297: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:25:04.303 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:03 vm03 bash[17524]: cluster 2026-03-09T14:25:02.969725+0000 mgr.x (mgr.14150) 361 : cluster [DBG] pgmap v297: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:25:06.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:06 vm04 bash[19581]: cluster 2026-03-09T14:25:04.969990+0000 mgr.x (mgr.14150) 362 : cluster [DBG] pgmap v298: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:25:06.508 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:06 vm05 bash[20070]: cluster 2026-03-09T14:25:04.969990+0000 mgr.x (mgr.14150) 362 : cluster [DBG] pgmap v298: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:25:06.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:06 vm03 bash[17524]: cluster 2026-03-09T14:25:04.969990+0000 mgr.x (mgr.14150) 362 : cluster [DBG] pgmap v298: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:25:08.053 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:25:07 vm03 bash[37744]: debug there is no tcmu-runner data available
2026-03-09T14:25:08.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:08 vm03 bash[17524]: cluster 2026-03-09T14:25:06.970255+0000 mgr.x (mgr.14150) 363 : cluster [DBG] pgmap v299: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:25:08.726 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:08 vm05 bash[20070]: cluster 2026-03-09T14:25:06.970255+0000 mgr.x (mgr.14150) 363 : cluster [DBG] pgmap v299: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:25:08.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:08 vm04 bash[19581]: cluster 2026-03-09T14:25:06.970255+0000 mgr.x (mgr.14150) 363 : cluster [DBG] pgmap v299: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:25:09.007 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:25:08 vm05 bash[38699]: debug there is no tcmu-runner data available
2026-03-09T14:25:09.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:09 vm03 bash[17524]: audit 2026-03-09T14:25:07.754671+0000 mgr.x (mgr.14150) 364 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:25:09.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:09 vm04 bash[19581]: audit 2026-03-09T14:25:07.754671+0000 mgr.x (mgr.14150) 364 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:25:09.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:09 vm05 bash[20070]: audit 2026-03-09T14:25:07.754671+0000 mgr.x (mgr.14150) 364 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:25:10.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:10 vm04 bash[19581]: audit 2026-03-09T14:25:08.726385+0000 mgr.x (mgr.14150) 365 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:25:10.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:10 vm04 bash[19581]: cluster 2026-03-09T14:25:08.970500+0000 mgr.x (mgr.14150) 366 : cluster [DBG] pgmap v300: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:25:10.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:10 vm05 bash[20070]: audit 2026-03-09T14:25:08.726385+0000 mgr.x (mgr.14150) 365 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:25:10.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:10 vm05 bash[20070]: cluster 2026-03-09T14:25:08.970500+0000 mgr.x (mgr.14150) 366 : cluster [DBG] pgmap v300: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:25:10.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:10 vm03 bash[17524]: audit 2026-03-09T14:25:08.726385+0000 mgr.x (mgr.14150) 365 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:25:10.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:10 vm03 bash[17524]: cluster 2026-03-09T14:25:08.970500+0000 mgr.x (mgr.14150) 366 : cluster [DBG] pgmap v300: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:25:12.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:12 vm04 bash[19581]: cluster 2026-03-09T14:25:10.970757+0000 mgr.x (mgr.14150) 367 : cluster [DBG] pgmap v301: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:25:12.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:12 vm05 bash[20070]: cluster 2026-03-09T14:25:10.970757+0000 mgr.x (mgr.14150) 367 : cluster [DBG] pgmap v301: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:25:12.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:12 vm03 bash[17524]: cluster 2026-03-09T14:25:10.970757+0000 mgr.x (mgr.14150) 367 : cluster [DBG] pgmap v301: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:25:14.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:14 vm04 bash[19581]: cluster 2026-03-09T14:25:12.970997+0000 mgr.x (mgr.14150) 368 : cluster [DBG] pgmap v302: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:25:14.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:14 vm05 bash[20070]: cluster 2026-03-09T14:25:12.970997+0000
mgr.x (mgr.14150) 368 : cluster [DBG] pgmap v302: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:14.758 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:14 vm05 bash[20070]: cluster 2026-03-09T14:25:12.970997+0000 mgr.x (mgr.14150) 368 : cluster [DBG] pgmap v302: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:14.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:14 vm03 bash[17524]: cluster 2026-03-09T14:25:12.970997+0000 mgr.x (mgr.14150) 368 : cluster [DBG] pgmap v302: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:14.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:14 vm03 bash[17524]: cluster 2026-03-09T14:25:12.970997+0000 mgr.x (mgr.14150) 368 : cluster [DBG] pgmap v302: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:16.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:16 vm04 bash[19581]: cluster 2026-03-09T14:25:14.971263+0000 mgr.x (mgr.14150) 369 : cluster [DBG] pgmap v303: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:16.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:16 vm04 bash[19581]: cluster 2026-03-09T14:25:14.971263+0000 mgr.x (mgr.14150) 369 : cluster [DBG] pgmap v303: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:16.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:16 vm05 bash[20070]: cluster 2026-03-09T14:25:14.971263+0000 mgr.x (mgr.14150) 369 : cluster [DBG] pgmap v303: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:16.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:16 vm05 bash[20070]: cluster 2026-03-09T14:25:14.971263+0000 mgr.x (mgr.14150) 369 : cluster [DBG] pgmap v303: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:16.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:16 vm03 bash[17524]: cluster 2026-03-09T14:25:14.971263+0000 mgr.x (mgr.14150) 369 : cluster [DBG] pgmap v303: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:16.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:16 vm03 bash[17524]: cluster 2026-03-09T14:25:14.971263+0000 mgr.x (mgr.14150) 369 : cluster [DBG] pgmap v303: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:18.053 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:25:17 vm03 bash[37744]: debug there is no tcmu-runner data available 2026-03-09T14:25:18.736 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:18 vm05 bash[20070]: cluster 2026-03-09T14:25:16.971535+0000 mgr.x (mgr.14150) 370 : cluster [DBG] pgmap v304: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:18.736 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:18 vm05 bash[20070]: cluster 2026-03-09T14:25:16.971535+0000 mgr.x (mgr.14150) 370 : cluster [DBG] pgmap v304: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:18.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:18 vm04 bash[19581]: cluster 
2026-03-09T14:25:16.971535+0000 mgr.x (mgr.14150) 370 : cluster [DBG] pgmap v304: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:18.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:18 vm04 bash[19581]: cluster 2026-03-09T14:25:16.971535+0000 mgr.x (mgr.14150) 370 : cluster [DBG] pgmap v304: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:18.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:18 vm03 bash[17524]: cluster 2026-03-09T14:25:16.971535+0000 mgr.x (mgr.14150) 370 : cluster [DBG] pgmap v304: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:18.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:18 vm03 bash[17524]: cluster 2026-03-09T14:25:16.971535+0000 mgr.x (mgr.14150) 370 : cluster [DBG] pgmap v304: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:19.007 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:25:18 vm05 bash[38699]: debug there is no tcmu-runner data available 2026-03-09T14:25:19.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:19 vm05 bash[20070]: audit 2026-03-09T14:25:17.761548+0000 mgr.x (mgr.14150) 371 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:19.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:19 vm05 bash[20070]: audit 2026-03-09T14:25:17.761548+0000 mgr.x (mgr.14150) 371 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:19.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:19 vm04 bash[19581]: audit 2026-03-09T14:25:17.761548+0000 mgr.x (mgr.14150) 371 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:19.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:19 vm04 bash[19581]: audit 2026-03-09T14:25:17.761548+0000 mgr.x (mgr.14150) 371 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:19.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:19 vm03 bash[17524]: audit 2026-03-09T14:25:17.761548+0000 mgr.x (mgr.14150) 371 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:19.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:19 vm03 bash[17524]: audit 2026-03-09T14:25:17.761548+0000 mgr.x (mgr.14150) 371 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:20.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:20 vm05 bash[20070]: audit 2026-03-09T14:25:18.736944+0000 mgr.x (mgr.14150) 372 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:20.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:20 vm05 bash[20070]: audit 2026-03-09T14:25:18.736944+0000 mgr.x (mgr.14150) 372 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:20.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:20 vm05 
bash[20070]: cluster 2026-03-09T14:25:18.971794+0000 mgr.x (mgr.14150) 373 : cluster [DBG] pgmap v305: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:20.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:20 vm05 bash[20070]: cluster 2026-03-09T14:25:18.971794+0000 mgr.x (mgr.14150) 373 : cluster [DBG] pgmap v305: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:20.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:20 vm04 bash[19581]: audit 2026-03-09T14:25:18.736944+0000 mgr.x (mgr.14150) 372 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:20.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:20 vm04 bash[19581]: audit 2026-03-09T14:25:18.736944+0000 mgr.x (mgr.14150) 372 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:20.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:20 vm04 bash[19581]: cluster 2026-03-09T14:25:18.971794+0000 mgr.x (mgr.14150) 373 : cluster [DBG] pgmap v305: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:20.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:20 vm04 bash[19581]: cluster 2026-03-09T14:25:18.971794+0000 mgr.x (mgr.14150) 373 : cluster [DBG] pgmap v305: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:20.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:20 vm03 bash[17524]: audit 2026-03-09T14:25:18.736944+0000 mgr.x (mgr.14150) 372 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:20.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:20 vm03 bash[17524]: audit 2026-03-09T14:25:18.736944+0000 mgr.x (mgr.14150) 372 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:20.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:20 vm03 bash[17524]: cluster 2026-03-09T14:25:18.971794+0000 mgr.x (mgr.14150) 373 : cluster [DBG] pgmap v305: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:20.803 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:20 vm03 bash[17524]: cluster 2026-03-09T14:25:18.971794+0000 mgr.x (mgr.14150) 373 : cluster [DBG] pgmap v305: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:23.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:22 vm04 bash[19581]: cluster 2026-03-09T14:25:20.972096+0000 mgr.x (mgr.14150) 374 : cluster [DBG] pgmap v306: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:23.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:22 vm04 bash[19581]: cluster 2026-03-09T14:25:20.972096+0000 mgr.x (mgr.14150) 374 : cluster [DBG] pgmap v306: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:23.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:22 vm05 bash[20070]: cluster 2026-03-09T14:25:20.972096+0000 mgr.x (mgr.14150) 374 : cluster [DBG] pgmap v306: 4 pgs: 4 active+clean; 449 KiB 
data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:23.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:22 vm05 bash[20070]: cluster 2026-03-09T14:25:20.972096+0000 mgr.x (mgr.14150) 374 : cluster [DBG] pgmap v306: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:23.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:22 vm03 bash[17524]: cluster 2026-03-09T14:25:20.972096+0000 mgr.x (mgr.14150) 374 : cluster [DBG] pgmap v306: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:23.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:22 vm03 bash[17524]: cluster 2026-03-09T14:25:20.972096+0000 mgr.x (mgr.14150) 374 : cluster [DBG] pgmap v306: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:24.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:23 vm04 bash[19581]: cluster 2026-03-09T14:25:22.972553+0000 mgr.x (mgr.14150) 375 : cluster [DBG] pgmap v307: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:24.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:23 vm04 bash[19581]: cluster 2026-03-09T14:25:22.972553+0000 mgr.x (mgr.14150) 375 : cluster [DBG] pgmap v307: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:24.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:23 vm05 bash[20070]: cluster 2026-03-09T14:25:22.972553+0000 mgr.x (mgr.14150) 375 : cluster [DBG] pgmap v307: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:24.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:23 vm05 bash[20070]: cluster 2026-03-09T14:25:22.972553+0000 mgr.x (mgr.14150) 375 : cluster [DBG] pgmap v307: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:24.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:23 vm03 bash[17524]: cluster 2026-03-09T14:25:22.972553+0000 mgr.x (mgr.14150) 375 : cluster [DBG] pgmap v307: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:24.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:23 vm03 bash[17524]: cluster 2026-03-09T14:25:22.972553+0000 mgr.x (mgr.14150) 375 : cluster [DBG] pgmap v307: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:26.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:26 vm04 bash[19581]: cluster 2026-03-09T14:25:24.972820+0000 mgr.x (mgr.14150) 376 : cluster [DBG] pgmap v308: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:26.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:26 vm04 bash[19581]: cluster 2026-03-09T14:25:24.972820+0000 mgr.x (mgr.14150) 376 : cluster [DBG] pgmap v308: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:26.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:26 vm05 bash[20070]: cluster 2026-03-09T14:25:24.972820+0000 mgr.x (mgr.14150) 376 : cluster [DBG] pgmap v308: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:26.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:26 
vm05 bash[20070]: cluster 2026-03-09T14:25:24.972820+0000 mgr.x (mgr.14150) 376 : cluster [DBG] pgmap v308: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:26.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:26 vm03 bash[17524]: cluster 2026-03-09T14:25:24.972820+0000 mgr.x (mgr.14150) 376 : cluster [DBG] pgmap v308: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:26.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:26 vm03 bash[17524]: cluster 2026-03-09T14:25:24.972820+0000 mgr.x (mgr.14150) 376 : cluster [DBG] pgmap v308: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:28.052 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:25:27 vm03 bash[37744]: debug there is no tcmu-runner data available 2026-03-09T14:25:28.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:28 vm04 bash[19581]: cluster 2026-03-09T14:25:26.973104+0000 mgr.x (mgr.14150) 377 : cluster [DBG] pgmap v309: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:28.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:28 vm04 bash[19581]: cluster 2026-03-09T14:25:26.973104+0000 mgr.x (mgr.14150) 377 : cluster [DBG] pgmap v309: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:28.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:28 vm05 bash[20070]: cluster 2026-03-09T14:25:26.973104+0000 mgr.x (mgr.14150) 377 : cluster [DBG] pgmap v309: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:28.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:28 vm05 bash[20070]: cluster 2026-03-09T14:25:26.973104+0000 mgr.x (mgr.14150) 377 : cluster [DBG] pgmap v309: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:28.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:28 vm03 bash[17524]: cluster 2026-03-09T14:25:26.973104+0000 mgr.x (mgr.14150) 377 : cluster [DBG] pgmap v309: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:28.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:28 vm03 bash[17524]: cluster 2026-03-09T14:25:26.973104+0000 mgr.x (mgr.14150) 377 : cluster [DBG] pgmap v309: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:29.007 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:25:28 vm05 bash[38699]: debug there is no tcmu-runner data available 2026-03-09T14:25:29.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:29 vm03 bash[17524]: audit 2026-03-09T14:25:27.769581+0000 mgr.x (mgr.14150) 378 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:29.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:29 vm03 bash[17524]: audit 2026-03-09T14:25:27.769581+0000 mgr.x (mgr.14150) 378 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:29.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:29 vm04 bash[19581]: audit 2026-03-09T14:25:27.769581+0000 mgr.x (mgr.14150) 378 : audit [DBG] from='client.14427 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:29.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:29 vm04 bash[19581]: audit 2026-03-09T14:25:27.769581+0000 mgr.x (mgr.14150) 378 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:29.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:29 vm05 bash[20070]: audit 2026-03-09T14:25:27.769581+0000 mgr.x (mgr.14150) 378 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:29.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:29 vm05 bash[20070]: audit 2026-03-09T14:25:27.769581+0000 mgr.x (mgr.14150) 378 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:30.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:30 vm03 bash[17524]: audit 2026-03-09T14:25:28.747043+0000 mgr.x (mgr.14150) 379 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:30.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:30 vm03 bash[17524]: audit 2026-03-09T14:25:28.747043+0000 mgr.x (mgr.14150) 379 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:30.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:30 vm03 bash[17524]: cluster 2026-03-09T14:25:28.973419+0000 mgr.x (mgr.14150) 380 : cluster [DBG] pgmap v310: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:30.553 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:30 vm03 bash[17524]: cluster 2026-03-09T14:25:28.973419+0000 mgr.x (mgr.14150) 380 : cluster [DBG] pgmap v310: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:30.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:30 vm04 bash[19581]: audit 2026-03-09T14:25:28.747043+0000 mgr.x (mgr.14150) 379 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:30.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:30 vm04 bash[19581]: audit 2026-03-09T14:25:28.747043+0000 mgr.x (mgr.14150) 379 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:30.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:30 vm04 bash[19581]: cluster 2026-03-09T14:25:28.973419+0000 mgr.x (mgr.14150) 380 : cluster [DBG] pgmap v310: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:30.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:30 vm04 bash[19581]: cluster 2026-03-09T14:25:28.973419+0000 mgr.x (mgr.14150) 380 : cluster [DBG] pgmap v310: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:30.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:30 vm05 bash[20070]: audit 2026-03-09T14:25:28.747043+0000 mgr.x (mgr.14150) 379 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:30.757 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:30 vm05 bash[20070]: audit 2026-03-09T14:25:28.747043+0000 mgr.x (mgr.14150) 379 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:30.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:30 vm05 bash[20070]: cluster 2026-03-09T14:25:28.973419+0000 mgr.x (mgr.14150) 380 : cluster [DBG] pgmap v310: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:30.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:30 vm05 bash[20070]: cluster 2026-03-09T14:25:28.973419+0000 mgr.x (mgr.14150) 380 : cluster [DBG] pgmap v310: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:32.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:32 vm04 bash[19581]: cluster 2026-03-09T14:25:30.973685+0000 mgr.x (mgr.14150) 381 : cluster [DBG] pgmap v311: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:32.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:32 vm04 bash[19581]: cluster 2026-03-09T14:25:30.973685+0000 mgr.x (mgr.14150) 381 : cluster [DBG] pgmap v311: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:32.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:32 vm05 bash[20070]: cluster 2026-03-09T14:25:30.973685+0000 mgr.x (mgr.14150) 381 : cluster [DBG] pgmap v311: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:32.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:32 vm05 bash[20070]: cluster 2026-03-09T14:25:30.973685+0000 mgr.x (mgr.14150) 381 : cluster [DBG] pgmap v311: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:32.804 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:32 vm03 bash[17524]: cluster 2026-03-09T14:25:30.973685+0000 mgr.x (mgr.14150) 381 : cluster [DBG] pgmap v311: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:32.806 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:32 vm03 bash[17524]: cluster 2026-03-09T14:25:30.973685+0000 mgr.x (mgr.14150) 381 : cluster [DBG] pgmap v311: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:34.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:34 vm05 bash[20070]: cluster 2026-03-09T14:25:32.973926+0000 mgr.x (mgr.14150) 382 : cluster [DBG] pgmap v312: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:34.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:34 vm05 bash[20070]: cluster 2026-03-09T14:25:32.973926+0000 mgr.x (mgr.14150) 382 : cluster [DBG] pgmap v312: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:34.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:34 vm04 bash[19581]: cluster 2026-03-09T14:25:32.973926+0000 mgr.x (mgr.14150) 382 : cluster [DBG] pgmap v312: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:34.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:34 vm04 bash[19581]: cluster 2026-03-09T14:25:32.973926+0000 mgr.x (mgr.14150) 382 : cluster 
[DBG] pgmap v312: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:34.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:34 vm03 bash[17524]: cluster 2026-03-09T14:25:32.973926+0000 mgr.x (mgr.14150) 382 : cluster [DBG] pgmap v312: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:34.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:34 vm03 bash[17524]: cluster 2026-03-09T14:25:32.973926+0000 mgr.x (mgr.14150) 382 : cluster [DBG] pgmap v312: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:37.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:36 vm05 bash[20070]: cluster 2026-03-09T14:25:34.974212+0000 mgr.x (mgr.14150) 383 : cluster [DBG] pgmap v313: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:37.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:36 vm05 bash[20070]: cluster 2026-03-09T14:25:34.974212+0000 mgr.x (mgr.14150) 383 : cluster [DBG] pgmap v313: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:37.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:36 vm04 bash[19581]: cluster 2026-03-09T14:25:34.974212+0000 mgr.x (mgr.14150) 383 : cluster [DBG] pgmap v313: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:37.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:36 vm04 bash[19581]: cluster 2026-03-09T14:25:34.974212+0000 mgr.x (mgr.14150) 383 : cluster [DBG] pgmap v313: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:37.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:36 vm03 bash[17524]: cluster 2026-03-09T14:25:34.974212+0000 mgr.x (mgr.14150) 383 : cluster [DBG] pgmap v313: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:37.053 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:36 vm03 bash[17524]: cluster 2026-03-09T14:25:34.974212+0000 mgr.x (mgr.14150) 383 : cluster [DBG] pgmap v313: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:38.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:37 vm05 bash[20070]: cluster 2026-03-09T14:25:36.974474+0000 mgr.x (mgr.14150) 384 : cluster [DBG] pgmap v314: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:38.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:37 vm05 bash[20070]: cluster 2026-03-09T14:25:36.974474+0000 mgr.x (mgr.14150) 384 : cluster [DBG] pgmap v314: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:38.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:37 vm04 bash[19581]: cluster 2026-03-09T14:25:36.974474+0000 mgr.x (mgr.14150) 384 : cluster [DBG] pgmap v314: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:38.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:37 vm04 bash[19581]: cluster 2026-03-09T14:25:36.974474+0000 mgr.x (mgr.14150) 384 : cluster [DBG] pgmap v314: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:38.052 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:37 vm03 bash[17524]: cluster 2026-03-09T14:25:36.974474+0000 mgr.x (mgr.14150) 384 : cluster [DBG] pgmap v314: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:38.052 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:37 vm03 bash[17524]: cluster 2026-03-09T14:25:36.974474+0000 mgr.x (mgr.14150) 384 : cluster [DBG] pgmap v314: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:38.052 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:25:37 vm03 bash[37744]: debug there is no tcmu-runner data available 2026-03-09T14:25:39.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:38 vm05 bash[20070]: audit 2026-03-09T14:25:37.777530+0000 mgr.x (mgr.14150) 385 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:39.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:38 vm05 bash[20070]: audit 2026-03-09T14:25:37.777530+0000 mgr.x (mgr.14150) 385 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:39.007 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:25:38 vm05 bash[38699]: debug there is no tcmu-runner data available 2026-03-09T14:25:39.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:38 vm04 bash[19581]: audit 2026-03-09T14:25:37.777530+0000 mgr.x (mgr.14150) 385 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:39.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:38 vm04 bash[19581]: audit 2026-03-09T14:25:37.777530+0000 mgr.x (mgr.14150) 385 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:39.052 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:38 vm03 bash[17524]: audit 2026-03-09T14:25:37.777530+0000 mgr.x (mgr.14150) 385 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:39.052 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:38 vm03 bash[17524]: audit 2026-03-09T14:25:37.777530+0000 mgr.x (mgr.14150) 385 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:40.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:39 vm05 bash[20070]: audit 2026-03-09T14:25:38.751569+0000 mgr.x (mgr.14150) 386 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:40.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:39 vm05 bash[20070]: audit 2026-03-09T14:25:38.751569+0000 mgr.x (mgr.14150) 386 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:40.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:39 vm05 bash[20070]: cluster 2026-03-09T14:25:38.974741+0000 mgr.x (mgr.14150) 387 : cluster [DBG] pgmap v315: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:40.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:39 vm05 bash[20070]: cluster 2026-03-09T14:25:38.974741+0000 
mgr.x (mgr.14150) 387 : cluster [DBG] pgmap v315: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:40.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:39 vm04 bash[19581]: audit 2026-03-09T14:25:38.751569+0000 mgr.x (mgr.14150) 386 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:40.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:39 vm04 bash[19581]: audit 2026-03-09T14:25:38.751569+0000 mgr.x (mgr.14150) 386 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:40.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:39 vm04 bash[19581]: cluster 2026-03-09T14:25:38.974741+0000 mgr.x (mgr.14150) 387 : cluster [DBG] pgmap v315: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:40.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:39 vm04 bash[19581]: cluster 2026-03-09T14:25:38.974741+0000 mgr.x (mgr.14150) 387 : cluster [DBG] pgmap v315: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:40.052 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:39 vm03 bash[17524]: audit 2026-03-09T14:25:38.751569+0000 mgr.x (mgr.14150) 386 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:40.052 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:39 vm03 bash[17524]: audit 2026-03-09T14:25:38.751569+0000 mgr.x (mgr.14150) 386 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:40.052 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:39 vm03 bash[17524]: cluster 2026-03-09T14:25:38.974741+0000 mgr.x (mgr.14150) 387 : cluster [DBG] pgmap v315: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:40.052 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:39 vm03 bash[17524]: cluster 2026-03-09T14:25:38.974741+0000 mgr.x (mgr.14150) 387 : cluster [DBG] pgmap v315: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:42.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:42 vm05 bash[20070]: cluster 2026-03-09T14:25:40.975004+0000 mgr.x (mgr.14150) 388 : cluster [DBG] pgmap v316: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:42.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:42 vm05 bash[20070]: cluster 2026-03-09T14:25:40.975004+0000 mgr.x (mgr.14150) 388 : cluster [DBG] pgmap v316: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:42.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:42 vm04 bash[19581]: cluster 2026-03-09T14:25:40.975004+0000 mgr.x (mgr.14150) 388 : cluster [DBG] pgmap v316: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:42.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:42 vm04 bash[19581]: cluster 2026-03-09T14:25:40.975004+0000 mgr.x (mgr.14150) 388 : cluster [DBG] pgmap v316: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s 
rd, 2 op/s 2026-03-09T14:25:42.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:42 vm03 bash[17524]: cluster 2026-03-09T14:25:40.975004+0000 mgr.x (mgr.14150) 388 : cluster [DBG] pgmap v316: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:42.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:42 vm03 bash[17524]: cluster 2026-03-09T14:25:40.975004+0000 mgr.x (mgr.14150) 388 : cluster [DBG] pgmap v316: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:44.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:44 vm05 bash[20070]: cluster 2026-03-09T14:25:42.975278+0000 mgr.x (mgr.14150) 389 : cluster [DBG] pgmap v317: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:44.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:44 vm05 bash[20070]: cluster 2026-03-09T14:25:42.975278+0000 mgr.x (mgr.14150) 389 : cluster [DBG] pgmap v317: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:44.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:44 vm04 bash[19581]: cluster 2026-03-09T14:25:42.975278+0000 mgr.x (mgr.14150) 389 : cluster [DBG] pgmap v317: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:44.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:44 vm04 bash[19581]: cluster 2026-03-09T14:25:42.975278+0000 mgr.x (mgr.14150) 389 : cluster [DBG] pgmap v317: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:44.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:44 vm03 bash[17524]: cluster 2026-03-09T14:25:42.975278+0000 mgr.x (mgr.14150) 389 : cluster [DBG] pgmap v317: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:44.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:44 vm03 bash[17524]: cluster 2026-03-09T14:25:42.975278+0000 mgr.x (mgr.14150) 389 : cluster [DBG] pgmap v317: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:46.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:46 vm05 bash[20070]: cluster 2026-03-09T14:25:44.975566+0000 mgr.x (mgr.14150) 390 : cluster [DBG] pgmap v318: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:46.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:46 vm05 bash[20070]: cluster 2026-03-09T14:25:44.975566+0000 mgr.x (mgr.14150) 390 : cluster [DBG] pgmap v318: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:46.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:46 vm04 bash[19581]: cluster 2026-03-09T14:25:44.975566+0000 mgr.x (mgr.14150) 390 : cluster [DBG] pgmap v318: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:46.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:46 vm04 bash[19581]: cluster 2026-03-09T14:25:44.975566+0000 mgr.x (mgr.14150) 390 : cluster [DBG] pgmap v318: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:46.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:46 vm03 bash[17524]: cluster 
2026-03-09T14:25:44.975566+0000 mgr.x (mgr.14150) 390 : cluster [DBG] pgmap v318: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:46.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:46 vm03 bash[17524]: cluster 2026-03-09T14:25:44.975566+0000 mgr.x (mgr.14150) 390 : cluster [DBG] pgmap v318: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:25:48.052 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:25:47 vm03 bash[37744]: debug there is no tcmu-runner data available 2026-03-09T14:25:48.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:48 vm05 bash[20070]: cluster 2026-03-09T14:25:46.975852+0000 mgr.x (mgr.14150) 391 : cluster [DBG] pgmap v319: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:48.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:48 vm05 bash[20070]: cluster 2026-03-09T14:25:46.975852+0000 mgr.x (mgr.14150) 391 : cluster [DBG] pgmap v319: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:48.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:48 vm04 bash[19581]: cluster 2026-03-09T14:25:46.975852+0000 mgr.x (mgr.14150) 391 : cluster [DBG] pgmap v319: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:48.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:48 vm04 bash[19581]: cluster 2026-03-09T14:25:46.975852+0000 mgr.x (mgr.14150) 391 : cluster [DBG] pgmap v319: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:48.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:48 vm03 bash[17524]: cluster 2026-03-09T14:25:46.975852+0000 mgr.x (mgr.14150) 391 : cluster [DBG] pgmap v319: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:48.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:48 vm03 bash[17524]: cluster 2026-03-09T14:25:46.975852+0000 mgr.x (mgr.14150) 391 : cluster [DBG] pgmap v319: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:49.226 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:25:48 vm05 bash[38699]: debug there is no tcmu-runner data available 2026-03-09T14:25:49.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:49 vm05 bash[20070]: audit 2026-03-09T14:25:47.785542+0000 mgr.x (mgr.14150) 392 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:49.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:49 vm05 bash[20070]: audit 2026-03-09T14:25:47.785542+0000 mgr.x (mgr.14150) 392 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:49.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:49 vm04 bash[19581]: audit 2026-03-09T14:25:47.785542+0000 mgr.x (mgr.14150) 392 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:49.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:49 vm04 bash[19581]: audit 2026-03-09T14:25:47.785542+0000 mgr.x (mgr.14150) 392 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:49.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:49 vm03 bash[17524]: audit 2026-03-09T14:25:47.785542+0000 mgr.x (mgr.14150) 392 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:49.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:49 vm03 bash[17524]: audit 2026-03-09T14:25:47.785542+0000 mgr.x (mgr.14150) 392 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:50.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:50 vm05 bash[20070]: audit 2026-03-09T14:25:48.758683+0000 mgr.x (mgr.14150) 393 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:50.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:50 vm05 bash[20070]: audit 2026-03-09T14:25:48.758683+0000 mgr.x (mgr.14150) 393 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:50.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:50 vm05 bash[20070]: cluster 2026-03-09T14:25:48.976110+0000 mgr.x (mgr.14150) 394 : cluster [DBG] pgmap v320: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:50.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:50 vm05 bash[20070]: cluster 2026-03-09T14:25:48.976110+0000 mgr.x (mgr.14150) 394 : cluster [DBG] pgmap v320: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:50.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:50 vm04 bash[19581]: audit 2026-03-09T14:25:48.758683+0000 mgr.x (mgr.14150) 393 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:50.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:50 vm04 bash[19581]: audit 2026-03-09T14:25:48.758683+0000 mgr.x (mgr.14150) 393 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:50.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:50 vm04 bash[19581]: cluster 2026-03-09T14:25:48.976110+0000 mgr.x (mgr.14150) 394 : cluster [DBG] pgmap v320: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:50.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:50 vm04 bash[19581]: cluster 2026-03-09T14:25:48.976110+0000 mgr.x (mgr.14150) 394 : cluster [DBG] pgmap v320: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:25:50.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:50 vm03 bash[17524]: audit 2026-03-09T14:25:48.758683+0000 mgr.x (mgr.14150) 393 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:50.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:50 vm03 bash[17524]: audit 2026-03-09T14:25:48.758683+0000 mgr.x (mgr.14150) 393 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:25:50.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 
2026-03-09T14:25:50.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:50 vm03 bash[17524]: cluster 2026-03-09T14:25:48.976110+0000 mgr.x (mgr.14150) 394 : cluster [DBG] pgmap v320: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:25:52.642 INFO:teuthology.orchestra.run.vm03.stderr:Note: switching to '569c3e99c9b32a51b4eaf08731c728f4513ed589'.
2026-03-09T14:25:52.642 INFO:teuthology.orchestra.run.vm03.stderr:
2026-03-09T14:25:52.642 INFO:teuthology.orchestra.run.vm03.stderr:You are in 'detached HEAD' state. You can look around, make experimental
2026-03-09T14:25:52.642 INFO:teuthology.orchestra.run.vm03.stderr:changes and commit them, and you can discard any commits you make in this
2026-03-09T14:25:52.642 INFO:teuthology.orchestra.run.vm03.stderr:state without impacting any branches by switching back to a branch.
2026-03-09T14:25:52.643 INFO:teuthology.orchestra.run.vm03.stderr:
2026-03-09T14:25:52.643 INFO:teuthology.orchestra.run.vm03.stderr:If you want to create a new branch to retain commits you create, you may
2026-03-09T14:25:52.643 INFO:teuthology.orchestra.run.vm03.stderr:do so (now or later) by using -c with the switch command. Example:
2026-03-09T14:25:52.643 INFO:teuthology.orchestra.run.vm03.stderr:
2026-03-09T14:25:52.643 INFO:teuthology.orchestra.run.vm03.stderr:  git switch -c <new-branch-name>
2026-03-09T14:25:52.643 INFO:teuthology.orchestra.run.vm03.stderr:
2026-03-09T14:25:52.643 INFO:teuthology.orchestra.run.vm03.stderr:Or undo this operation with:
2026-03-09T14:25:52.643 INFO:teuthology.orchestra.run.vm03.stderr:
2026-03-09T14:25:52.643 INFO:teuthology.orchestra.run.vm03.stderr:  git switch -
2026-03-09T14:25:52.643 INFO:teuthology.orchestra.run.vm03.stderr:
2026-03-09T14:25:52.643 INFO:teuthology.orchestra.run.vm03.stderr:Turn off this advice by setting config variable advice.detachedHead to false
2026-03-09T14:25:52.643 INFO:teuthology.orchestra.run.vm03.stderr:
2026-03-09T14:25:52.643 INFO:teuthology.orchestra.run.vm03.stderr:HEAD is now at 569c3e99c9b qa/rgw: bucket notifications use pynose
2026-03-09T14:25:52.648 DEBUG:teuthology.orchestra.run.vm03:> cp -- /home/ubuntu/cephtest/clone.client.0/src/test/cli-integration/rbd/gwcli_create.t /home/ubuntu/cephtest/archive/cram.client.0
2026-03-09T14:25:52.697 DEBUG:teuthology.orchestra.run.vm04:> mkdir -- /home/ubuntu/cephtest/archive/cram.client.1 && python3 -m venv /home/ubuntu/cephtest/virtualenv && /home/ubuntu/cephtest/virtualenv/bin/pip install cram==0.6
2026-03-09T14:25:52.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:52 vm05 bash[20070]: cluster 2026-03-09T14:25:50.976375+0000 mgr.x (mgr.14150) 395 : cluster [DBG] pgmap v321: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:25:52.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:52 vm05 bash[20070]: cluster 2026-03-09T14:25:50.976375+0000 mgr.x (mgr.14150) 395 : cluster [DBG] pgmap v321: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:25:52.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:52 vm03 bash[17524]: cluster 2026-03-09T14:25:50.976375+0000 mgr.x (mgr.14150) 395 : cluster [DBG] pgmap v321: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:25:52.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:52 vm03 bash[17524]: cluster 2026-03-09T14:25:50.976375+0000 mgr.x (mgr.14150) 395 : cluster [DBG] pgmap v321: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:25:53.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:52 vm04 bash[19581]: cluster 2026-03-09T14:25:50.976375+0000 mgr.x (mgr.14150) 395 : cluster [DBG] pgmap v321: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:25:53.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:52 vm04 bash[19581]: cluster 2026-03-09T14:25:50.976375+0000 mgr.x (mgr.14150) 395 : cluster [DBG] pgmap v321: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:25:54.460 INFO:teuthology.orchestra.run.vm04.stdout:Collecting cram==0.6
2026-03-09T14:25:54.506 INFO:teuthology.orchestra.run.vm04.stdout:  Downloading cram-0.6-py2.py3-none-any.whl (17 kB)
2026-03-09T14:25:54.530 INFO:teuthology.orchestra.run.vm04.stdout:Installing collected packages: cram
2026-03-09T14:25:54.536 INFO:teuthology.orchestra.run.vm04.stdout:Successfully installed cram-0.6
2026-03-09T14:25:54.585 DEBUG:teuthology.orchestra.run.vm04:> rm -rf /home/ubuntu/cephtest/clone.client.1 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.1 && cd /home/ubuntu/cephtest/clone.client.1 && git checkout 569c3e99c9b32a51b4eaf08731c728f4513ed589
2026-03-09T14:25:54.588 INFO:teuthology.orchestra.run.vm04.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.1'...
2026-03-09T14:25:54.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:54 vm03 bash[17524]: cluster 2026-03-09T14:25:52.976685+0000 mgr.x (mgr.14150) 396 : cluster [DBG] pgmap v322: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:25:54.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:54 vm03 bash[17524]: cluster 2026-03-09T14:25:52.976685+0000 mgr.x (mgr.14150) 396 : cluster [DBG] pgmap v322: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:25:54.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:54 vm03 bash[17524]: audit 2026-03-09T14:25:54.209842+0000 mon.a (mon.0) 742 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:25:54.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:54 vm03 bash[17524]: audit 2026-03-09T14:25:54.209842+0000 mon.a (mon.0) 742 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:25:55.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:54 vm05 bash[20070]: cluster 2026-03-09T14:25:52.976685+0000 mgr.x (mgr.14150) 396 : cluster [DBG] pgmap v322: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:25:55.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:54 vm05 bash[20070]: cluster 2026-03-09T14:25:52.976685+0000 mgr.x (mgr.14150) 396 : cluster [DBG] pgmap v322: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:25:55.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:54 vm05 bash[20070]: audit 2026-03-09T14:25:54.209842+0000 mon.a (mon.0) 742 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:25:55.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:54 vm05 bash[20070]: audit 2026-03-09T14:25:54.209842+0000 mon.a (mon.0) 742 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:25:55.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:54 vm04 bash[19581]: cluster 2026-03-09T14:25:52.976685+0000 mgr.x (mgr.14150) 396 : cluster [DBG] pgmap v322: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:25:55.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:54 vm04 bash[19581]: cluster 2026-03-09T14:25:52.976685+0000 mgr.x (mgr.14150) 396 : cluster [DBG] pgmap v322: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:25:55.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:54 vm04 bash[19581]: audit 2026-03-09T14:25:54.209842+0000 mon.a (mon.0) 742 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:25:55.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:54 vm04 bash[19581]: audit 2026-03-09T14:25:54.209842+0000 mon.a (mon.0) 742 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:25:55.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:55 vm03 bash[17524]: audit 2026-03-09T14:25:54.569696+0000 mon.a (mon.0) 743 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:25:55.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:55 vm03 bash[17524]: audit 2026-03-09T14:25:54.569696+0000 mon.a (mon.0) 743 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:25:55.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:55 vm03 bash[17524]: audit 2026-03-09T14:25:54.570245+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:25:55.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:55 vm03 bash[17524]: audit 2026-03-09T14:25:54.570245+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:25:55.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:55 vm03 bash[17524]: audit 2026-03-09T14:25:54.574573+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:25:55.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:55 vm03 bash[17524]: audit 2026-03-09T14:25:54.574573+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:25:56.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:55 vm04 bash[19581]: audit 2026-03-09T14:25:54.569696+0000 mon.a (mon.0) 743 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:25:56.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:55 vm04 bash[19581]: audit 2026-03-09T14:25:54.569696+0000 mon.a (mon.0) 743 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:25:56.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:55 vm04 bash[19581]: audit 2026-03-09T14:25:54.570245+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:25:56.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:55 vm04 bash[19581]: audit 2026-03-09T14:25:54.570245+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:25:56.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:55 vm04 bash[19581]: audit 2026-03-09T14:25:54.574573+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:25:56.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:55 vm04 bash[19581]: audit 2026-03-09T14:25:54.574573+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:25:56.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:55 vm05 bash[20070]: audit 2026-03-09T14:25:54.569696+0000 mon.a (mon.0) 743 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:25:56.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:55 vm05 bash[20070]: audit 2026-03-09T14:25:54.569696+0000 mon.a (mon.0) 743 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:25:56.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:55 vm05 bash[20070]: audit 2026-03-09T14:25:54.570245+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:25:56.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:55 vm05 bash[20070]: audit 2026-03-09T14:25:54.570245+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:25:56.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:55 vm05 bash[20070]: audit 2026-03-09T14:25:54.574573+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:25:56.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:55 vm05 bash[20070]: audit 2026-03-09T14:25:54.574573+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x'
2026-03-09T14:25:57.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:56 vm05 bash[20070]: cluster 2026-03-09T14:25:54.976984+0000 mgr.x (mgr.14150) 397 : cluster [DBG] pgmap v323: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:25:57.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:56 vm05 bash[20070]: cluster 2026-03-09T14:25:54.976984+0000 mgr.x (mgr.14150) 397 : cluster [DBG] pgmap v323: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:25:57.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:56 vm04 bash[19581]: cluster 2026-03-09T14:25:54.976984+0000 mgr.x (mgr.14150) 397 : cluster [DBG] pgmap v323: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:25:57.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:56 vm04 bash[19581]: cluster 2026-03-09T14:25:54.976984+0000 mgr.x (mgr.14150) 397 : cluster [DBG] pgmap v323: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:25:57.052 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:56 vm03 bash[17524]: cluster 2026-03-09T14:25:54.976984+0000 mgr.x (mgr.14150) 397 : cluster [DBG] pgmap v323: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:25:57.052 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:56 vm03 bash[17524]: cluster 2026-03-09T14:25:54.976984+0000 mgr.x (mgr.14150) 397 : cluster [DBG] pgmap v323: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:25:58.052 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:25:57 vm03 bash[37744]: debug there is no tcmu-runner data available
2026-03-09T14:25:59.007 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:25:58 vm05 bash[38699]: debug there is no tcmu-runner data available
2026-03-09T14:25:59.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:58 vm05 bash[20070]: cluster 2026-03-09T14:25:56.977234+0000 mgr.x (mgr.14150) 398 : cluster [DBG] pgmap v324: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:25:59.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:58 vm05 bash[20070]: cluster 2026-03-09T14:25:56.977234+0000 mgr.x (mgr.14150) 398 : cluster [DBG] pgmap v324: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:25:59.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:58 vm04 bash[19581]: cluster 2026-03-09T14:25:56.977234+0000 mgr.x (mgr.14150) 398 : cluster [DBG] pgmap v324: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:25:59.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:58 vm04 bash[19581]: cluster 2026-03-09T14:25:56.977234+0000 mgr.x (mgr.14150) 398 : cluster [DBG] pgmap v324: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:25:59.052 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:58 vm03 bash[17524]: cluster 2026-03-09T14:25:56.977234+0000 mgr.x (mgr.14150) 398 : cluster [DBG] pgmap v324: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:25:59.052 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:58 vm03 bash[17524]: cluster 2026-03-09T14:25:56.977234+0000 mgr.x (mgr.14150) 398 : cluster [DBG] pgmap v324: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:00.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:59 vm05 bash[20070]: audit 2026-03-09T14:25:57.796036+0000 mgr.x (mgr.14150) 399 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:26:00.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:25:59 vm05 bash[20070]: audit 2026-03-09T14:25:57.796036+0000 mgr.x (mgr.14150) 399 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a'
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:26:00.009 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:59 vm04 bash[19581]: audit 2026-03-09T14:25:57.796036+0000 mgr.x (mgr.14150) 399 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:26:00.009 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:25:59 vm04 bash[19581]: audit 2026-03-09T14:25:57.796036+0000 mgr.x (mgr.14150) 399 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:26:00.052 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:59 vm03 bash[17524]: audit 2026-03-09T14:25:57.796036+0000 mgr.x (mgr.14150) 399 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:26:00.052 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:25:59 vm03 bash[17524]: audit 2026-03-09T14:25:57.796036+0000 mgr.x (mgr.14150) 399 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:26:01.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:00 vm05 bash[20070]: audit 2026-03-09T14:25:58.769200+0000 mgr.x (mgr.14150) 400 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:26:01.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:00 vm05 bash[20070]: audit 2026-03-09T14:25:58.769200+0000 mgr.x (mgr.14150) 400 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:26:01.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:00 vm05 bash[20070]: cluster 2026-03-09T14:25:58.977472+0000 mgr.x (mgr.14150) 401 : cluster [DBG] pgmap v325: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:26:01.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:00 vm05 bash[20070]: cluster 2026-03-09T14:25:58.977472+0000 mgr.x (mgr.14150) 401 : cluster [DBG] pgmap v325: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:26:01.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:00 vm04 bash[19581]: audit 2026-03-09T14:25:58.769200+0000 mgr.x (mgr.14150) 400 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:26:01.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:00 vm04 bash[19581]: audit 2026-03-09T14:25:58.769200+0000 mgr.x (mgr.14150) 400 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:26:01.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:00 vm04 bash[19581]: cluster 2026-03-09T14:25:58.977472+0000 mgr.x (mgr.14150) 401 : cluster [DBG] pgmap v325: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:26:01.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:00 vm04 bash[19581]: cluster 2026-03-09T14:25:58.977472+0000 mgr.x (mgr.14150) 401 : cluster [DBG] pgmap v325: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:26:01.052 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 
2026-03-09T14:26:01.052 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:00 vm03 bash[17524]: cluster 2026-03-09T14:25:58.977472+0000 mgr.x (mgr.14150) 401 : cluster [DBG] pgmap v325: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:02.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:01 vm05 bash[20070]: cluster 2026-03-09T14:26:00.977728+0000 mgr.x (mgr.14150) 402 : cluster [DBG] pgmap v326: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:26:02.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:01 vm04 bash[19581]: cluster 2026-03-09T14:26:00.977728+0000 mgr.x (mgr.14150) 402 : cluster [DBG] pgmap v326: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:26:02.052 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:01 vm03 bash[17524]: cluster 2026-03-09T14:26:00.977728+0000 mgr.x (mgr.14150) 402 : cluster [DBG] pgmap v326: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:26:04.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:04 vm03 bash[17524]: cluster 2026-03-09T14:26:02.978015+0000 mgr.x (mgr.14150) 403 : cluster [DBG] pgmap v327: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:04.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:04 vm05 bash[20070]: cluster 2026-03-09T14:26:02.978015+0000 mgr.x (mgr.14150) 403 : cluster [DBG] pgmap v327: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:04.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:04 vm04 bash[19581]: cluster 2026-03-09T14:26:02.978015+0000 mgr.x (mgr.14150) 403 : cluster [DBG] pgmap v327: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:06.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:06 vm03 bash[17524]: cluster 2026-03-09T14:26:04.978280+0000 mgr.x (mgr.14150) 404 : cluster [DBG] pgmap v328: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:26:06.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:06 vm05 bash[20070]: cluster 2026-03-09T14:26:04.978280+0000 mgr.x (mgr.14150) 404 : cluster [DBG] pgmap v328: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:26:06.507 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:06 vm04 bash[19581]: cluster 2026-03-09T14:26:04.978280+0000 mgr.x (mgr.14150) 404 : cluster [DBG] pgmap v328: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:26:08.274 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:26:07 vm03 bash[37744]: debug there is no tcmu-runner data available
2026-03-09T14:26:08.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:08 vm03 bash[17524]: cluster 2026-03-09T14:26:06.978570+0000 mgr.x (mgr.14150) 405 : cluster [DBG] pgmap v329: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:08.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:08 vm05 bash[20070]: cluster 2026-03-09T14:26:06.978570+0000 mgr.x (mgr.14150) 405 : cluster [DBG] pgmap v329: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:08.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:08 vm04 bash[19581]: cluster 2026-03-09T14:26:06.978570+0000 mgr.x (mgr.14150) 405 : cluster [DBG] pgmap v329: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:09.257 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:26:08 vm05 bash[38699]: debug there is no tcmu-runner data available
2026-03-09T14:26:09.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:09 vm03 bash[17524]: audit 2026-03-09T14:26:07.806629+0000 mgr.x (mgr.14150) 406 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:26:09.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:09 vm04 bash[19581]: audit 2026-03-09T14:26:07.806629+0000 mgr.x (mgr.14150) 406 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:26:09.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:09 vm05 bash[20070]: audit 2026-03-09T14:26:07.806629+0000 mgr.x (mgr.14150) 406 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:26:10.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:10 vm03 bash[17524]: audit 2026-03-09T14:26:08.779835+0000 mgr.x (mgr.14150) 407 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:26:10.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:10 vm03 bash[17524]: cluster 2026-03-09T14:26:08.978865+0000 mgr.x (mgr.14150) 408 : cluster [DBG] pgmap v330: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:10.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:10 vm04 bash[19581]: audit 2026-03-09T14:26:08.779835+0000 mgr.x (mgr.14150) 407 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:26:10.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:10 vm04 bash[19581]: cluster 2026-03-09T14:26:08.978865+0000 mgr.x (mgr.14150) 408 : cluster [DBG] pgmap v330: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:10.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:10 vm05 bash[20070]: audit 2026-03-09T14:26:08.779835+0000 mgr.x (mgr.14150) 407 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:26:10.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:10 vm05 bash[20070]: cluster 2026-03-09T14:26:08.978865+0000 mgr.x (mgr.14150) 408 : cluster [DBG] pgmap v330: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:12.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:12 vm03 bash[17524]: cluster 2026-03-09T14:26:10.979119+0000 mgr.x (mgr.14150) 409 : cluster [DBG] pgmap v331: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:26:12.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:12 vm05 bash[20070]: cluster 2026-03-09T14:26:10.979119+0000 mgr.x (mgr.14150) 409 : cluster [DBG] pgmap v331: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:26:12.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:12 vm04 bash[19581]: cluster 2026-03-09T14:26:10.979119+0000 mgr.x (mgr.14150) 409 : cluster [DBG] pgmap v331: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:26:14.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:14 vm03 bash[17524]: cluster 2026-03-09T14:26:12.979363+0000 mgr.x (mgr.14150) 410 : cluster [DBG] pgmap v332: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:14.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:14 vm05 bash[20070]: cluster 2026-03-09T14:26:12.979363+0000 mgr.x (mgr.14150) 410 : cluster [DBG] pgmap v332: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:14.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:14 vm04 bash[19581]: cluster 2026-03-09T14:26:12.979363+0000 mgr.x (mgr.14150) 410 : cluster [DBG] pgmap v332: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:16.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:16 vm03 bash[17524]: cluster 2026-03-09T14:26:14.979611+0000 mgr.x (mgr.14150) 411 : cluster [DBG] pgmap v333: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:26:16.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:16 vm05 bash[20070]: cluster 2026-03-09T14:26:14.979611+0000 mgr.x (mgr.14150) 411 : cluster [DBG] pgmap v333: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:26:16.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:16 vm04 bash[19581]: cluster 2026-03-09T14:26:14.979611+0000 mgr.x (mgr.14150) 411 : cluster [DBG] pgmap v333: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:26:18.302 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:26:17 vm03 bash[37744]: debug there is no tcmu-runner data available
2026-03-09T14:26:18.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:18 vm04 bash[19581]: cluster 2026-03-09T14:26:16.979836+0000 mgr.x (mgr.14150) 412 : cluster [DBG] pgmap v334: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:18.757 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:18 vm05 bash[20070]: cluster 2026-03-09T14:26:16.979836+0000 mgr.x (mgr.14150) 412 : cluster [DBG] pgmap v334: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:18.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:18 vm03 bash[17524]: cluster 2026-03-09T14:26:16.979836+0000 mgr.x (mgr.14150) 412 : cluster [DBG] pgmap v334: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:19.256 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:26:18 vm05 bash[38699]: debug there is no tcmu-runner data available
2026-03-09T14:26:19.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:19 vm05 bash[20070]: audit 2026-03-09T14:26:17.817312+0000 mgr.x (mgr.14150) 413 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:26:19.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:19 vm04 bash[19581]: audit 2026-03-09T14:26:17.817312+0000 mgr.x (mgr.14150) 413 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:26:19.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:19 vm03 bash[17524]: audit 2026-03-09T14:26:17.817312+0000 mgr.x (mgr.14150) 413 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:26:20.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:20 vm05 bash[20070]: audit 2026-03-09T14:26:18.785787+0000 mgr.x (mgr.14150) 414 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:26:20.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:20 vm05 bash[20070]: cluster 2026-03-09T14:26:18.980057+0000 mgr.x (mgr.14150) 415 : cluster [DBG] pgmap v335: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:20.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:20 vm04 bash[19581]: audit 2026-03-09T14:26:18.785787+0000 mgr.x (mgr.14150) 414 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:26:20.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:20 vm04 bash[19581]: cluster 2026-03-09T14:26:18.980057+0000 mgr.x (mgr.14150) 415 : cluster [DBG] pgmap v335: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:20.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:20 vm03 bash[17524]: audit 2026-03-09T14:26:18.785787+0000 mgr.x (mgr.14150) 414 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:26:20.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:20 vm03 bash[17524]: cluster 2026-03-09T14:26:18.980057+0000 mgr.x (mgr.14150) 415 : cluster [DBG] pgmap v335: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:22.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:22 vm05 bash[20070]: cluster 2026-03-09T14:26:20.980353+0000 mgr.x (mgr.14150) 416 : cluster [DBG] pgmap v336: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:26:22.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:22 vm04 bash[19581]: cluster 2026-03-09T14:26:20.980353+0000 mgr.x (mgr.14150) 416 : cluster [DBG] pgmap v336: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:26:22.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:22 vm03 bash[17524]: cluster 2026-03-09T14:26:20.980353+0000 mgr.x (mgr.14150) 416 : cluster [DBG] pgmap v336: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:26:24.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:24 vm05 bash[20070]: cluster 2026-03-09T14:26:22.980620+0000 mgr.x (mgr.14150) 417 : cluster [DBG] pgmap v337: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:24.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:24 vm04 bash[19581]: cluster 2026-03-09T14:26:22.980620+0000 mgr.x (mgr.14150) 417 : cluster [DBG] pgmap v337: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:24.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:24 vm03 bash[17524]: cluster 2026-03-09T14:26:22.980620+0000 mgr.x (mgr.14150) 417 : cluster [DBG] pgmap v337: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:26.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:26 vm05 bash[20070]: cluster 2026-03-09T14:26:24.980877+0000 mgr.x (mgr.14150) 418 : cluster [DBG] pgmap v338: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:26:26.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:26 vm04 bash[19581]: cluster 2026-03-09T14:26:24.980877+0000 mgr.x (mgr.14150) 418 : cluster [DBG] pgmap v338: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:26:26.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:26 vm03 bash[17524]: cluster 2026-03-09T14:26:24.980877+0000 mgr.x (mgr.14150) 418 : cluster [DBG] pgmap v338: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:26:28.302 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:26:27 vm03 bash[37744]: debug there is no tcmu-runner data available
2026-03-09T14:26:28.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:28 vm04 bash[19581]: cluster 2026-03-09T14:26:26.981112+0000 mgr.x (mgr.14150) 419 : cluster [DBG] pgmap v339: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:28.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:28 vm05 bash[20070]: cluster 2026-03-09T14:26:26.981112+0000 mgr.x (mgr.14150) 419 : cluster [DBG] pgmap v339: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:28.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:28 vm03 bash[17524]: cluster 2026-03-09T14:26:26.981112+0000 mgr.x (mgr.14150) 419 : cluster [DBG] pgmap v339: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:29.256 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:26:28 vm05 bash[38699]: debug there is no tcmu-runner data available
2026-03-09T14:26:29.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:29 vm05 bash[20070]: audit 2026-03-09T14:26:27.827840+0000 mgr.x (mgr.14150) 420 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:26:29.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:29 vm04 bash[19581]: audit 2026-03-09T14:26:27.827840+0000 mgr.x (mgr.14150) 420 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:26:29.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:29 vm03 bash[17524]: audit 2026-03-09T14:26:27.827840+0000 mgr.x (mgr.14150) 420 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:26:30.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:30 vm05 bash[20070]: audit 2026-03-09T14:26:28.789094+0000 mgr.x (mgr.14150) 421 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:26:30.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:30 vm05 bash[20070]: cluster 2026-03-09T14:26:28.981346+0000 mgr.x (mgr.14150) 422 : cluster [DBG] pgmap v340: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:30.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:30 vm04 bash[19581]: audit 2026-03-09T14:26:28.789094+0000 mgr.x (mgr.14150) 421 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:26:30.757 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:30 vm04 bash[19581]: cluster 2026-03-09T14:26:28.981346+0000 mgr.x (mgr.14150) 422 : cluster [DBG] pgmap v340: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:30.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:30 vm03 bash[17524]: audit 2026-03-09T14:26:28.789094+0000 mgr.x (mgr.14150) 421 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:26:30.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:30 vm03 bash[17524]: cluster 2026-03-09T14:26:28.981346+0000 mgr.x (mgr.14150) 422 : cluster [DBG] pgmap v340: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:32.875 INFO:teuthology.orchestra.run.vm04.stderr:Note: switching to '569c3e99c9b32a51b4eaf08731c728f4513ed589'.
2026-03-09T14:26:32.875 INFO:teuthology.orchestra.run.vm04.stderr:
2026-03-09T14:26:32.875 INFO:teuthology.orchestra.run.vm04.stderr:You are in 'detached HEAD' state. You can look around, make experimental
2026-03-09T14:26:32.875 INFO:teuthology.orchestra.run.vm04.stderr:changes and commit them, and you can discard any commits you make in this
2026-03-09T14:26:32.876 INFO:teuthology.orchestra.run.vm04.stderr:state without impacting any branches by switching back to a branch.
2026-03-09T14:26:32.876 INFO:teuthology.orchestra.run.vm04.stderr:
2026-03-09T14:26:32.876 INFO:teuthology.orchestra.run.vm04.stderr:If you want to create a new branch to retain commits you create, you may
2026-03-09T14:26:32.876 INFO:teuthology.orchestra.run.vm04.stderr:do so (now or later) by using -c with the switch command. Example:
2026-03-09T14:26:32.876 INFO:teuthology.orchestra.run.vm04.stderr:
2026-03-09T14:26:32.876 INFO:teuthology.orchestra.run.vm04.stderr: git switch -c <new-branch-name>
2026-03-09T14:26:32.876 INFO:teuthology.orchestra.run.vm04.stderr:
2026-03-09T14:26:32.876 INFO:teuthology.orchestra.run.vm04.stderr:Or undo this operation with:
2026-03-09T14:26:32.876 INFO:teuthology.orchestra.run.vm04.stderr:
2026-03-09T14:26:32.876 INFO:teuthology.orchestra.run.vm04.stderr: git switch -
2026-03-09T14:26:32.876 INFO:teuthology.orchestra.run.vm04.stderr:
2026-03-09T14:26:32.876 INFO:teuthology.orchestra.run.vm04.stderr:Turn off this advice by setting config variable advice.detachedHead to false
2026-03-09T14:26:32.876 INFO:teuthology.orchestra.run.vm04.stderr:
2026-03-09T14:26:32.876 INFO:teuthology.orchestra.run.vm04.stderr:HEAD is now at 569c3e99c9b qa/rgw: bucket notifications use pynose
2026-03-09T14:26:32.882 DEBUG:teuthology.orchestra.run.vm04:> cp -- /home/ubuntu/cephtest/clone.client.1/src/test/cli-integration/rbd/iscsi_client.t /home/ubuntu/cephtest/archive/cram.client.1
2026-03-09T14:26:32.886 DEBUG:teuthology.orchestra.run.vm05:> mkdir -- /home/ubuntu/cephtest/archive/cram.client.2 && python3 -m venv /home/ubuntu/cephtest/virtualenv && /home/ubuntu/cephtest/virtualenv/bin/pip install cram==0.6
2026-03-09T14:26:33.006 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:32 vm05 bash[20070]: cluster 2026-03-09T14:26:30.981583+0000 mgr.x (mgr.14150) 423 : cluster [DBG] pgmap v341: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:26:33.007 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:32 vm04 bash[19581]: cluster 2026-03-09T14:26:30.981583+0000 mgr.x (mgr.14150) 423 : cluster [DBG] pgmap v341: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:26:33.052 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:32 vm03 bash[17524]: cluster 2026-03-09T14:26:30.981583+0000 mgr.x (mgr.14150) 423 : cluster [DBG] pgmap v341: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:26:34.590 INFO:teuthology.orchestra.run.vm05.stdout:Collecting cram==0.6
2026-03-09T14:26:34.632 INFO:teuthology.orchestra.run.vm05.stdout: Downloading cram-0.6-py2.py3-none-any.whl (17 kB)
2026-03-09T14:26:34.646 INFO:teuthology.orchestra.run.vm05.stdout:Installing collected packages: cram
2026-03-09T14:26:34.651 INFO:teuthology.orchestra.run.vm05.stdout:Successfully installed cram-0.6
2026-03-09T14:26:34.681 DEBUG:teuthology.orchestra.run.vm05:> rm -rf /home/ubuntu/cephtest/clone.client.2 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.2 && cd /home/ubuntu/cephtest/clone.client.2 && git checkout 569c3e99c9b32a51b4eaf08731c728f4513ed589
2026-03-09T14:26:34.684 INFO:teuthology.orchestra.run.vm05.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.2'...
2026-03-09T14:26:35.006 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:34 vm04 bash[19581]: cluster 2026-03-09T14:26:32.981818+0000 mgr.x (mgr.14150) 424 : cluster [DBG] pgmap v342: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:35.006 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:34 vm05 bash[20070]: cluster 2026-03-09T14:26:32.981818+0000 mgr.x (mgr.14150) 424 : cluster [DBG] pgmap v342: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:35.052 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:34 vm03 bash[17524]: cluster 2026-03-09T14:26:32.981818+0000 mgr.x (mgr.14150) 424 : cluster [DBG] pgmap v342: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:36.006 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:35 vm04 bash[19581]: cluster 2026-03-09T14:26:34.982052+0000 mgr.x (mgr.14150) 425 : cluster [DBG] pgmap v343: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:26:36.006 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:35 vm05 bash[20070]: cluster 2026-03-09T14:26:34.982052+0000 mgr.x (mgr.14150) 425 : cluster [DBG] pgmap v343: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:26:36.052 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:35 vm03 bash[17524]: cluster 2026-03-09T14:26:34.982052+0000 mgr.x (mgr.14150) 425 : cluster [DBG] pgmap v343: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:26:38.302 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:26:37 vm03 bash[37744]: debug there is no tcmu-runner data available
2026-03-09T14:26:38.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:38 vm03 bash[17524]: cluster 2026-03-09T14:26:36.982357+0000 mgr.x (mgr.14150) 426 : cluster [DBG] pgmap v344: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:38.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:38 vm05 bash[20070]: cluster 2026-03-09T14:26:36.982357+0000 mgr.x (mgr.14150) 426 : cluster [DBG] pgmap v344: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:38.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:38 vm04 bash[19581]: cluster 2026-03-09T14:26:36.982357+0000 mgr.x (mgr.14150) 426 : cluster [DBG] pgmap v344: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:39.085 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:26:38 vm05 bash[38699]: debug there is no tcmu-runner data available
2026-03-09T14:26:39.085 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:39 vm05 bash[20070]: audit 2026-03-09T14:26:37.833977+0000 mgr.x (mgr.14150) 427 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:26:39.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:39 vm04 bash[19581]: audit 2026-03-09T14:26:37.833977+0000 mgr.x (mgr.14150) 427 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:26:39.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:39 vm03 bash[17524]: audit 2026-03-09T14:26:37.833977+0000 mgr.x (mgr.14150) 427 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:26:40.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:40 vm04 bash[19581]: audit 2026-03-09T14:26:38.799204+0000 mgr.x (mgr.14150) 428 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:26:40.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:40 vm04 bash[19581]: cluster 2026-03-09T14:26:38.982631+0000 mgr.x (mgr.14150) 429 : cluster [DBG] pgmap v345: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:40.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:40 vm05 bash[20070]: audit 2026-03-09T14:26:38.799204+0000 mgr.x (mgr.14150) 428 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:26:40.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:40 vm05 bash[20070]: cluster 2026-03-09T14:26:38.982631+0000 mgr.x (mgr.14150) 429 : cluster [DBG] pgmap v345: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:40.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:40 vm03 bash[17524]: audit 2026-03-09T14:26:38.799204+0000 mgr.x (mgr.14150) 428 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:26:40.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:40 vm03 bash[17524]: cluster 2026-03-09T14:26:38.982631+0000 mgr.x (mgr.14150) 429 : cluster [DBG] pgmap v345: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:42.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:42 vm05 bash[20070]: cluster 2026-03-09T14:26:40.982883+0000 mgr.x (mgr.14150) 430 : cluster [DBG] pgmap v346: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:26:42.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:42 vm04 bash[19581]: cluster 2026-03-09T14:26:40.982883+0000 mgr.x (mgr.14150) 430 : cluster [DBG] pgmap v346: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:26:42.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:42 vm03 bash[17524]: cluster 2026-03-09T14:26:40.982883+0000 mgr.x (mgr.14150) 430 : cluster [DBG] pgmap v346: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:26:44.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:44 vm03 bash[17524]: cluster 2026-03-09T14:26:42.983113+0000 mgr.x (mgr.14150) 431 : cluster [DBG] pgmap v347: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:44.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:44 vm05 bash[20070]: cluster 2026-03-09T14:26:42.983113+0000 mgr.x (mgr.14150) 431 : cluster [DBG] pgmap v347: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:26:44.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:44 vm04 bash[19581]: cluster 2026-03-09T14:26:42.983113+0000 mgr.x (mgr.14150) 431 : cluster [DBG] pgmap v347: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 
GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:26:46.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:46 vm03 bash[17524]: cluster 2026-03-09T14:26:44.983360+0000 mgr.x (mgr.14150) 432 : cluster [DBG] pgmap v348: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:26:46.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:46 vm03 bash[17524]: cluster 2026-03-09T14:26:44.983360+0000 mgr.x (mgr.14150) 432 : cluster [DBG] pgmap v348: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:26:46.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:46 vm04 bash[19581]: cluster 2026-03-09T14:26:44.983360+0000 mgr.x (mgr.14150) 432 : cluster [DBG] pgmap v348: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:26:46.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:46 vm04 bash[19581]: cluster 2026-03-09T14:26:44.983360+0000 mgr.x (mgr.14150) 432 : cluster [DBG] pgmap v348: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:26:46.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:46 vm05 bash[20070]: cluster 2026-03-09T14:26:44.983360+0000 mgr.x (mgr.14150) 432 : cluster [DBG] pgmap v348: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:26:46.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:46 vm05 bash[20070]: cluster 2026-03-09T14:26:44.983360+0000 mgr.x (mgr.14150) 432 : cluster [DBG] pgmap v348: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:26:48.264 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:26:47 vm03 bash[37744]: debug there is no tcmu-runner data available 2026-03-09T14:26:48.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:48 vm03 bash[17524]: cluster 2026-03-09T14:26:46.983636+0000 mgr.x (mgr.14150) 433 : cluster [DBG] pgmap v349: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:26:48.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:48 vm03 bash[17524]: cluster 2026-03-09T14:26:46.983636+0000 mgr.x (mgr.14150) 433 : cluster [DBG] pgmap v349: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:26:48.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:48 vm05 bash[20070]: cluster 2026-03-09T14:26:46.983636+0000 mgr.x (mgr.14150) 433 : cluster [DBG] pgmap v349: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:26:48.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:48 vm05 bash[20070]: cluster 2026-03-09T14:26:46.983636+0000 mgr.x (mgr.14150) 433 : cluster [DBG] pgmap v349: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:26:48.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:48 vm04 bash[19581]: cluster 2026-03-09T14:26:46.983636+0000 mgr.x (mgr.14150) 433 : cluster [DBG] pgmap v349: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:26:48.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:48 vm04 bash[19581]: cluster 2026-03-09T14:26:46.983636+0000 mgr.x (mgr.14150) 433 : cluster [DBG] pgmap v349: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 
160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:26:49.256 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:26:48 vm05 bash[38699]: debug there is no tcmu-runner data available 2026-03-09T14:26:49.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:49 vm03 bash[17524]: audit 2026-03-09T14:26:47.837185+0000 mgr.x (mgr.14150) 434 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:26:49.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:49 vm03 bash[17524]: audit 2026-03-09T14:26:47.837185+0000 mgr.x (mgr.14150) 434 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:26:49.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:49 vm05 bash[20070]: audit 2026-03-09T14:26:47.837185+0000 mgr.x (mgr.14150) 434 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:26:49.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:49 vm05 bash[20070]: audit 2026-03-09T14:26:47.837185+0000 mgr.x (mgr.14150) 434 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:26:49.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:49 vm04 bash[19581]: audit 2026-03-09T14:26:47.837185+0000 mgr.x (mgr.14150) 434 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:26:49.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:49 vm04 bash[19581]: audit 2026-03-09T14:26:47.837185+0000 mgr.x (mgr.14150) 434 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:26:50.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:50 vm03 bash[17524]: audit 2026-03-09T14:26:48.809778+0000 mgr.x (mgr.14150) 435 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:26:50.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:50 vm03 bash[17524]: audit 2026-03-09T14:26:48.809778+0000 mgr.x (mgr.14150) 435 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:26:50.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:50 vm03 bash[17524]: cluster 2026-03-09T14:26:48.983937+0000 mgr.x (mgr.14150) 436 : cluster [DBG] pgmap v350: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:26:50.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:50 vm03 bash[17524]: cluster 2026-03-09T14:26:48.983937+0000 mgr.x (mgr.14150) 436 : cluster [DBG] pgmap v350: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:26:50.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:50 vm05 bash[20070]: audit 2026-03-09T14:26:48.809778+0000 mgr.x (mgr.14150) 435 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:26:50.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:50 vm05 bash[20070]: audit 2026-03-09T14:26:48.809778+0000 mgr.x (mgr.14150) 435 : audit [DBG] from='client.24376 -' 
entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:26:50.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:50 vm05 bash[20070]: cluster 2026-03-09T14:26:48.983937+0000 mgr.x (mgr.14150) 436 : cluster [DBG] pgmap v350: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:26:50.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:50 vm05 bash[20070]: cluster 2026-03-09T14:26:48.983937+0000 mgr.x (mgr.14150) 436 : cluster [DBG] pgmap v350: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:26:50.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:50 vm04 bash[19581]: audit 2026-03-09T14:26:48.809778+0000 mgr.x (mgr.14150) 435 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:26:50.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:50 vm04 bash[19581]: audit 2026-03-09T14:26:48.809778+0000 mgr.x (mgr.14150) 435 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:26:50.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:50 vm04 bash[19581]: cluster 2026-03-09T14:26:48.983937+0000 mgr.x (mgr.14150) 436 : cluster [DBG] pgmap v350: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:26:50.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:50 vm04 bash[19581]: cluster 2026-03-09T14:26:48.983937+0000 mgr.x (mgr.14150) 436 : cluster [DBG] pgmap v350: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:26:52.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:52 vm05 bash[20070]: cluster 2026-03-09T14:26:50.984245+0000 mgr.x (mgr.14150) 437 : cluster [DBG] pgmap v351: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:26:52.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:52 vm05 bash[20070]: cluster 2026-03-09T14:26:50.984245+0000 mgr.x (mgr.14150) 437 : cluster [DBG] pgmap v351: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:26:52.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:52 vm04 bash[19581]: cluster 2026-03-09T14:26:50.984245+0000 mgr.x (mgr.14150) 437 : cluster [DBG] pgmap v351: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:26:52.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:52 vm04 bash[19581]: cluster 2026-03-09T14:26:50.984245+0000 mgr.x (mgr.14150) 437 : cluster [DBG] pgmap v351: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:26:52.801 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:52 vm03 bash[17524]: cluster 2026-03-09T14:26:50.984245+0000 mgr.x (mgr.14150) 437 : cluster [DBG] pgmap v351: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:26:52.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:52 vm03 bash[17524]: cluster 2026-03-09T14:26:50.984245+0000 mgr.x (mgr.14150) 437 : cluster [DBG] pgmap v351: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:26:54.756 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:54 vm05 bash[20070]: cluster 2026-03-09T14:26:52.984508+0000 mgr.x (mgr.14150) 438 : cluster [DBG] pgmap v352: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:26:54.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:54 vm05 bash[20070]: cluster 2026-03-09T14:26:52.984508+0000 mgr.x (mgr.14150) 438 : cluster [DBG] pgmap v352: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:26:54.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:54 vm04 bash[19581]: cluster 2026-03-09T14:26:52.984508+0000 mgr.x (mgr.14150) 438 : cluster [DBG] pgmap v352: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:26:54.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:54 vm04 bash[19581]: cluster 2026-03-09T14:26:52.984508+0000 mgr.x (mgr.14150) 438 : cluster [DBG] pgmap v352: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:26:54.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:54 vm03 bash[17524]: cluster 2026-03-09T14:26:52.984508+0000 mgr.x (mgr.14150) 438 : cluster [DBG] pgmap v352: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:26:54.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:54 vm03 bash[17524]: cluster 2026-03-09T14:26:52.984508+0000 mgr.x (mgr.14150) 438 : cluster [DBG] pgmap v352: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:26:55.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:55 vm05 bash[20070]: audit 2026-03-09T14:26:54.592711+0000 mon.a (mon.0) 746 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:26:55.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:55 vm05 bash[20070]: audit 2026-03-09T14:26:54.592711+0000 mon.a (mon.0) 746 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:26:55.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:55 vm05 bash[20070]: audit 2026-03-09T14:26:54.927011+0000 mon.a (mon.0) 747 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:26:55.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:55 vm05 bash[20070]: audit 2026-03-09T14:26:54.927011+0000 mon.a (mon.0) 747 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:26:55.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:55 vm05 bash[20070]: audit 2026-03-09T14:26:54.927751+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:26:55.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:55 vm05 bash[20070]: audit 2026-03-09T14:26:54.927751+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:26:55.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:55 vm05 bash[20070]: audit 2026-03-09T14:26:54.932782+0000 
mon.a (mon.0) 749 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:26:55.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:55 vm05 bash[20070]: audit 2026-03-09T14:26:54.932782+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:26:55.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:55 vm04 bash[19581]: audit 2026-03-09T14:26:54.592711+0000 mon.a (mon.0) 746 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:26:55.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:55 vm04 bash[19581]: audit 2026-03-09T14:26:54.592711+0000 mon.a (mon.0) 746 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:26:55.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:55 vm04 bash[19581]: audit 2026-03-09T14:26:54.927011+0000 mon.a (mon.0) 747 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:26:55.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:55 vm04 bash[19581]: audit 2026-03-09T14:26:54.927011+0000 mon.a (mon.0) 747 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:26:55.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:55 vm04 bash[19581]: audit 2026-03-09T14:26:54.927751+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:26:55.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:55 vm04 bash[19581]: audit 2026-03-09T14:26:54.927751+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:26:55.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:55 vm04 bash[19581]: audit 2026-03-09T14:26:54.932782+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:26:55.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:55 vm04 bash[19581]: audit 2026-03-09T14:26:54.932782+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:26:55.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:55 vm03 bash[17524]: audit 2026-03-09T14:26:54.592711+0000 mon.a (mon.0) 746 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:26:55.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:55 vm03 bash[17524]: audit 2026-03-09T14:26:54.592711+0000 mon.a (mon.0) 746 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:26:55.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:55 vm03 bash[17524]: audit 2026-03-09T14:26:54.927011+0000 mon.a (mon.0) 747 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:26:55.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:55 vm03 bash[17524]: audit 2026-03-09T14:26:54.927011+0000 mon.a (mon.0) 747 : audit [DBG] 
from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:26:55.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:55 vm03 bash[17524]: audit 2026-03-09T14:26:54.927751+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:26:55.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:55 vm03 bash[17524]: audit 2026-03-09T14:26:54.927751+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:26:55.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:55 vm03 bash[17524]: audit 2026-03-09T14:26:54.932782+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:26:55.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:55 vm03 bash[17524]: audit 2026-03-09T14:26:54.932782+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:26:56.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:56 vm03 bash[17524]: cluster 2026-03-09T14:26:54.984753+0000 mgr.x (mgr.14150) 439 : cluster [DBG] pgmap v353: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:26:56.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:56 vm03 bash[17524]: cluster 2026-03-09T14:26:54.984753+0000 mgr.x (mgr.14150) 439 : cluster [DBG] pgmap v353: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:26:57.006 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:56 vm05 bash[20070]: cluster 2026-03-09T14:26:54.984753+0000 mgr.x (mgr.14150) 439 : cluster [DBG] pgmap v353: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:26:57.006 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:56 vm05 bash[20070]: cluster 2026-03-09T14:26:54.984753+0000 mgr.x (mgr.14150) 439 : cluster [DBG] pgmap v353: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:26:57.006 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:56 vm04 bash[19581]: cluster 2026-03-09T14:26:54.984753+0000 mgr.x (mgr.14150) 439 : cluster [DBG] pgmap v353: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:26:57.006 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:56 vm04 bash[19581]: cluster 2026-03-09T14:26:54.984753+0000 mgr.x (mgr.14150) 439 : cluster [DBG] pgmap v353: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:26:58.006 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:57 vm05 bash[20070]: cluster 2026-03-09T14:26:56.985061+0000 mgr.x (mgr.14150) 440 : cluster [DBG] pgmap v354: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:26:58.006 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:57 vm05 bash[20070]: cluster 2026-03-09T14:26:56.985061+0000 mgr.x (mgr.14150) 440 : cluster [DBG] pgmap v354: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:26:58.006 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:57 vm04 bash[19581]: cluster 
2026-03-09T14:26:56.985061+0000 mgr.x (mgr.14150) 440 : cluster [DBG] pgmap v354: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:26:58.006 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:57 vm04 bash[19581]: cluster 2026-03-09T14:26:56.985061+0000 mgr.x (mgr.14150) 440 : cluster [DBG] pgmap v354: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:26:58.051 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:26:57 vm03 bash[37744]: debug there is no tcmu-runner data available 2026-03-09T14:26:58.051 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:57 vm03 bash[17524]: cluster 2026-03-09T14:26:56.985061+0000 mgr.x (mgr.14150) 440 : cluster [DBG] pgmap v354: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:26:58.051 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:57 vm03 bash[17524]: cluster 2026-03-09T14:26:56.985061+0000 mgr.x (mgr.14150) 440 : cluster [DBG] pgmap v354: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:26:59.006 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:58 vm04 bash[19581]: audit 2026-03-09T14:26:57.846017+0000 mgr.x (mgr.14150) 441 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:26:59.006 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:58 vm04 bash[19581]: audit 2026-03-09T14:26:57.846017+0000 mgr.x (mgr.14150) 441 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:26:59.006 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:58 vm05 bash[20070]: audit 2026-03-09T14:26:57.846017+0000 mgr.x (mgr.14150) 441 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:26:59.006 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:58 vm05 bash[20070]: audit 2026-03-09T14:26:57.846017+0000 mgr.x (mgr.14150) 441 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:26:59.006 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:26:58 vm05 bash[38699]: debug there is no tcmu-runner data available 2026-03-09T14:26:59.051 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:58 vm03 bash[17524]: audit 2026-03-09T14:26:57.846017+0000 mgr.x (mgr.14150) 441 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:26:59.051 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:58 vm03 bash[17524]: audit 2026-03-09T14:26:57.846017+0000 mgr.x (mgr.14150) 441 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:00.006 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:59 vm05 bash[20070]: audit 2026-03-09T14:26:58.820311+0000 mgr.x (mgr.14150) 442 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:00.006 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:59 vm05 bash[20070]: audit 2026-03-09T14:26:58.820311+0000 mgr.x (mgr.14150) 442 : audit [DBG] from='client.24376 -' 
entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:00.006 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:59 vm05 bash[20070]: cluster 2026-03-09T14:26:58.985280+0000 mgr.x (mgr.14150) 443 : cluster [DBG] pgmap v355: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:00.006 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:26:59 vm05 bash[20070]: cluster 2026-03-09T14:26:58.985280+0000 mgr.x (mgr.14150) 443 : cluster [DBG] pgmap v355: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:00.006 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:59 vm04 bash[19581]: audit 2026-03-09T14:26:58.820311+0000 mgr.x (mgr.14150) 442 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:00.006 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:59 vm04 bash[19581]: audit 2026-03-09T14:26:58.820311+0000 mgr.x (mgr.14150) 442 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:00.006 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:59 vm04 bash[19581]: cluster 2026-03-09T14:26:58.985280+0000 mgr.x (mgr.14150) 443 : cluster [DBG] pgmap v355: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:00.006 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:26:59 vm04 bash[19581]: cluster 2026-03-09T14:26:58.985280+0000 mgr.x (mgr.14150) 443 : cluster [DBG] pgmap v355: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:00.052 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:59 vm03 bash[17524]: audit 2026-03-09T14:26:58.820311+0000 mgr.x (mgr.14150) 442 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:00.052 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:59 vm03 bash[17524]: audit 2026-03-09T14:26:58.820311+0000 mgr.x (mgr.14150) 442 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:00.052 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:59 vm03 bash[17524]: cluster 2026-03-09T14:26:58.985280+0000 mgr.x (mgr.14150) 443 : cluster [DBG] pgmap v355: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:00.052 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:26:59 vm03 bash[17524]: cluster 2026-03-09T14:26:58.985280+0000 mgr.x (mgr.14150) 443 : cluster [DBG] pgmap v355: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:02.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:02 vm03 bash[17524]: cluster 2026-03-09T14:27:00.985535+0000 mgr.x (mgr.14150) 444 : cluster [DBG] pgmap v356: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:02.302 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:02 vm03 bash[17524]: cluster 2026-03-09T14:27:00.985535+0000 mgr.x (mgr.14150) 444 : cluster [DBG] pgmap v356: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:02.506 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:02 vm05 bash[20070]: cluster 2026-03-09T14:27:00.985535+0000 mgr.x (mgr.14150) 444 : cluster [DBG] pgmap v356: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:02.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:02 vm05 bash[20070]: cluster 2026-03-09T14:27:00.985535+0000 mgr.x (mgr.14150) 444 : cluster [DBG] pgmap v356: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:02.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:02 vm04 bash[19581]: cluster 2026-03-09T14:27:00.985535+0000 mgr.x (mgr.14150) 444 : cluster [DBG] pgmap v356: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:02.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:02 vm04 bash[19581]: cluster 2026-03-09T14:27:00.985535+0000 mgr.x (mgr.14150) 444 : cluster [DBG] pgmap v356: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:04.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:04 vm05 bash[20070]: cluster 2026-03-09T14:27:02.985897+0000 mgr.x (mgr.14150) 445 : cluster [DBG] pgmap v357: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:04.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:04 vm05 bash[20070]: cluster 2026-03-09T14:27:02.985897+0000 mgr.x (mgr.14150) 445 : cluster [DBG] pgmap v357: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:04.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:04 vm04 bash[19581]: cluster 2026-03-09T14:27:02.985897+0000 mgr.x (mgr.14150) 445 : cluster [DBG] pgmap v357: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:04.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:04 vm04 bash[19581]: cluster 2026-03-09T14:27:02.985897+0000 mgr.x (mgr.14150) 445 : cluster [DBG] pgmap v357: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:04.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:04 vm03 bash[17524]: cluster 2026-03-09T14:27:02.985897+0000 mgr.x (mgr.14150) 445 : cluster [DBG] pgmap v357: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:04.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:04 vm03 bash[17524]: cluster 2026-03-09T14:27:02.985897+0000 mgr.x (mgr.14150) 445 : cluster [DBG] pgmap v357: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:06.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:06 vm05 bash[20070]: cluster 2026-03-09T14:27:04.986205+0000 mgr.x (mgr.14150) 446 : cluster [DBG] pgmap v358: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:06.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:06 vm05 bash[20070]: cluster 2026-03-09T14:27:04.986205+0000 mgr.x (mgr.14150) 446 : cluster [DBG] pgmap v358: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:06.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:06 vm04 bash[19581]: cluster 2026-03-09T14:27:04.986205+0000 mgr.x (mgr.14150) 446 : cluster 
[DBG] pgmap v358: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:06.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:06 vm04 bash[19581]: cluster 2026-03-09T14:27:04.986205+0000 mgr.x (mgr.14150) 446 : cluster [DBG] pgmap v358: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:06.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:06 vm03 bash[17524]: cluster 2026-03-09T14:27:04.986205+0000 mgr.x (mgr.14150) 446 : cluster [DBG] pgmap v358: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:06.552 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:06 vm03 bash[17524]: cluster 2026-03-09T14:27:04.986205+0000 mgr.x (mgr.14150) 446 : cluster [DBG] pgmap v358: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:08.211 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:27:07 vm03 bash[37744]: debug there is no tcmu-runner data available 2026-03-09T14:27:08.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:08 vm05 bash[20070]: cluster 2026-03-09T14:27:06.986499+0000 mgr.x (mgr.14150) 447 : cluster [DBG] pgmap v359: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:08.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:08 vm05 bash[20070]: cluster 2026-03-09T14:27:06.986499+0000 mgr.x (mgr.14150) 447 : cluster [DBG] pgmap v359: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:08.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:08 vm04 bash[19581]: cluster 2026-03-09T14:27:06.986499+0000 mgr.x (mgr.14150) 447 : cluster [DBG] pgmap v359: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:08.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:08 vm04 bash[19581]: cluster 2026-03-09T14:27:06.986499+0000 mgr.x (mgr.14150) 447 : cluster [DBG] pgmap v359: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:08.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:08 vm03 bash[17524]: cluster 2026-03-09T14:27:06.986499+0000 mgr.x (mgr.14150) 447 : cluster [DBG] pgmap v359: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:08.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:08 vm03 bash[17524]: cluster 2026-03-09T14:27:06.986499+0000 mgr.x (mgr.14150) 447 : cluster [DBG] pgmap v359: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:09.219 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:09 vm05 bash[20070]: audit 2026-03-09T14:27:07.853600+0000 mgr.x (mgr.14150) 448 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:09.219 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:09 vm05 bash[20070]: audit 2026-03-09T14:27:07.853600+0000 mgr.x (mgr.14150) 448 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:09.219 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:27:08 vm05 bash[38699]: debug there is no tcmu-runner data available 
2026-03-09T14:27:09.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:09 vm04 bash[19581]: audit 2026-03-09T14:27:07.853600+0000 mgr.x (mgr.14150) 448 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:09.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:09 vm04 bash[19581]: audit 2026-03-09T14:27:07.853600+0000 mgr.x (mgr.14150) 448 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:09.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:09 vm03 bash[17524]: audit 2026-03-09T14:27:07.853600+0000 mgr.x (mgr.14150) 448 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:09.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:09 vm03 bash[17524]: audit 2026-03-09T14:27:07.853600+0000 mgr.x (mgr.14150) 448 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:10.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:10 vm05 bash[20070]: audit 2026-03-09T14:27:08.831007+0000 mgr.x (mgr.14150) 449 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:10.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:10 vm05 bash[20070]: audit 2026-03-09T14:27:08.831007+0000 mgr.x (mgr.14150) 449 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:10.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:10 vm05 bash[20070]: cluster 2026-03-09T14:27:08.986757+0000 mgr.x (mgr.14150) 450 : cluster [DBG] pgmap v360: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:10.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:10 vm05 bash[20070]: cluster 2026-03-09T14:27:08.986757+0000 mgr.x (mgr.14150) 450 : cluster [DBG] pgmap v360: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:10.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:10 vm04 bash[19581]: audit 2026-03-09T14:27:08.831007+0000 mgr.x (mgr.14150) 449 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:10.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:10 vm04 bash[19581]: audit 2026-03-09T14:27:08.831007+0000 mgr.x (mgr.14150) 449 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:10.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:10 vm04 bash[19581]: cluster 2026-03-09T14:27:08.986757+0000 mgr.x (mgr.14150) 450 : cluster [DBG] pgmap v360: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:10.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:10 vm04 bash[19581]: cluster 2026-03-09T14:27:08.986757+0000 mgr.x (mgr.14150) 450 : cluster [DBG] pgmap v360: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:10.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:10 vm03 bash[17524]: audit 
2026-03-09T14:27:08.831007+0000 mgr.x (mgr.14150) 449 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:10.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:10 vm03 bash[17524]: audit 2026-03-09T14:27:08.831007+0000 mgr.x (mgr.14150) 449 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:10.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:10 vm03 bash[17524]: cluster 2026-03-09T14:27:08.986757+0000 mgr.x (mgr.14150) 450 : cluster [DBG] pgmap v360: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:10.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:10 vm03 bash[17524]: cluster 2026-03-09T14:27:08.986757+0000 mgr.x (mgr.14150) 450 : cluster [DBG] pgmap v360: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:12.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:12 vm05 bash[20070]: cluster 2026-03-09T14:27:10.987049+0000 mgr.x (mgr.14150) 451 : cluster [DBG] pgmap v361: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:12.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:12 vm05 bash[20070]: cluster 2026-03-09T14:27:10.987049+0000 mgr.x (mgr.14150) 451 : cluster [DBG] pgmap v361: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:12.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:12 vm04 bash[19581]: cluster 2026-03-09T14:27:10.987049+0000 mgr.x (mgr.14150) 451 : cluster [DBG] pgmap v361: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:12.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:12 vm04 bash[19581]: cluster 2026-03-09T14:27:10.987049+0000 mgr.x (mgr.14150) 451 : cluster [DBG] pgmap v361: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:12.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:12 vm03 bash[17524]: cluster 2026-03-09T14:27:10.987049+0000 mgr.x (mgr.14150) 451 : cluster [DBG] pgmap v361: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:12.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:12 vm03 bash[17524]: cluster 2026-03-09T14:27:10.987049+0000 mgr.x (mgr.14150) 451 : cluster [DBG] pgmap v361: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:14.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:14 vm05 bash[20070]: cluster 2026-03-09T14:27:12.987272+0000 mgr.x (mgr.14150) 452 : cluster [DBG] pgmap v362: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:14.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:14 vm05 bash[20070]: cluster 2026-03-09T14:27:12.987272+0000 mgr.x (mgr.14150) 452 : cluster [DBG] pgmap v362: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:14.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:14 vm04 bash[19581]: cluster 2026-03-09T14:27:12.987272+0000 mgr.x (mgr.14150) 452 : cluster [DBG] pgmap v362: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB 
/ 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:14.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:14 vm04 bash[19581]: cluster 2026-03-09T14:27:12.987272+0000 mgr.x (mgr.14150) 452 : cluster [DBG] pgmap v362: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:14.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:14 vm03 bash[17524]: cluster 2026-03-09T14:27:12.987272+0000 mgr.x (mgr.14150) 452 : cluster [DBG] pgmap v362: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:14.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:14 vm03 bash[17524]: cluster 2026-03-09T14:27:12.987272+0000 mgr.x (mgr.14150) 452 : cluster [DBG] pgmap v362: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:16.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:16 vm04 bash[19581]: cluster 2026-03-09T14:27:14.987502+0000 mgr.x (mgr.14150) 453 : cluster [DBG] pgmap v363: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:16.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:16 vm04 bash[19581]: cluster 2026-03-09T14:27:14.987502+0000 mgr.x (mgr.14150) 453 : cluster [DBG] pgmap v363: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:16.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:16 vm05 bash[20070]: cluster 2026-03-09T14:27:14.987502+0000 mgr.x (mgr.14150) 453 : cluster [DBG] pgmap v363: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:16.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:16 vm05 bash[20070]: cluster 2026-03-09T14:27:14.987502+0000 mgr.x (mgr.14150) 453 : cluster [DBG] pgmap v363: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:16.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:16 vm03 bash[17524]: cluster 2026-03-09T14:27:14.987502+0000 mgr.x (mgr.14150) 453 : cluster [DBG] pgmap v363: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:16.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:16 vm03 bash[17524]: cluster 2026-03-09T14:27:14.987502+0000 mgr.x (mgr.14150) 453 : cluster [DBG] pgmap v363: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:18.240 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:27:17 vm03 bash[37744]: debug there is no tcmu-runner data available 2026-03-09T14:27:18.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:18 vm05 bash[20070]: cluster 2026-03-09T14:27:16.987754+0000 mgr.x (mgr.14150) 454 : cluster [DBG] pgmap v364: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:18.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:18 vm05 bash[20070]: cluster 2026-03-09T14:27:16.987754+0000 mgr.x (mgr.14150) 454 : cluster [DBG] pgmap v364: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:18.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:18 vm04 bash[19581]: cluster 2026-03-09T14:27:16.987754+0000 mgr.x (mgr.14150) 454 : cluster [DBG] pgmap v364: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 
GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:18.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:18 vm04 bash[19581]: cluster 2026-03-09T14:27:16.987754+0000 mgr.x (mgr.14150) 454 : cluster [DBG] pgmap v364: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:18.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:18 vm03 bash[17524]: cluster 2026-03-09T14:27:16.987754+0000 mgr.x (mgr.14150) 454 : cluster [DBG] pgmap v364: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:18.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:18 vm03 bash[17524]: cluster 2026-03-09T14:27:16.987754+0000 mgr.x (mgr.14150) 454 : cluster [DBG] pgmap v364: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:19.248 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:27:18 vm05 bash[38699]: debug there is no tcmu-runner data available 2026-03-09T14:27:19.248 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:19 vm05 bash[20070]: audit 2026-03-09T14:27:17.864190+0000 mgr.x (mgr.14150) 455 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:19.248 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:19 vm05 bash[20070]: audit 2026-03-09T14:27:17.864190+0000 mgr.x (mgr.14150) 455 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:19.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:19 vm04 bash[19581]: audit 2026-03-09T14:27:17.864190+0000 mgr.x (mgr.14150) 455 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:19.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:19 vm04 bash[19581]: audit 2026-03-09T14:27:17.864190+0000 mgr.x (mgr.14150) 455 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:19.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:19 vm03 bash[17524]: audit 2026-03-09T14:27:17.864190+0000 mgr.x (mgr.14150) 455 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:19.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:19 vm03 bash[17524]: audit 2026-03-09T14:27:17.864190+0000 mgr.x (mgr.14150) 455 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:20.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:20 vm05 bash[20070]: audit 2026-03-09T14:27:18.837943+0000 mgr.x (mgr.14150) 456 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:20.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:20 vm05 bash[20070]: audit 2026-03-09T14:27:18.837943+0000 mgr.x (mgr.14150) 456 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:20.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:20 vm05 bash[20070]: cluster 2026-03-09T14:27:18.988039+0000 mgr.x (mgr.14150) 457 : cluster [DBG] pgmap v365: 4 pgs: 4 active+clean; 449 KiB 
data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:20.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:20 vm05 bash[20070]: cluster 2026-03-09T14:27:18.988039+0000 mgr.x (mgr.14150) 457 : cluster [DBG] pgmap v365: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:20.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:20 vm04 bash[19581]: audit 2026-03-09T14:27:18.837943+0000 mgr.x (mgr.14150) 456 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:20.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:20 vm04 bash[19581]: audit 2026-03-09T14:27:18.837943+0000 mgr.x (mgr.14150) 456 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:20.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:20 vm04 bash[19581]: cluster 2026-03-09T14:27:18.988039+0000 mgr.x (mgr.14150) 457 : cluster [DBG] pgmap v365: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:20.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:20 vm04 bash[19581]: cluster 2026-03-09T14:27:18.988039+0000 mgr.x (mgr.14150) 457 : cluster [DBG] pgmap v365: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:20.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:20 vm03 bash[17524]: audit 2026-03-09T14:27:18.837943+0000 mgr.x (mgr.14150) 456 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:20.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:20 vm03 bash[17524]: audit 2026-03-09T14:27:18.837943+0000 mgr.x (mgr.14150) 456 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:20.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:20 vm03 bash[17524]: cluster 2026-03-09T14:27:18.988039+0000 mgr.x (mgr.14150) 457 : cluster [DBG] pgmap v365: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:20.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:20 vm03 bash[17524]: cluster 2026-03-09T14:27:18.988039+0000 mgr.x (mgr.14150) 457 : cluster [DBG] pgmap v365: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:22.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:22 vm05 bash[20070]: cluster 2026-03-09T14:27:20.988335+0000 mgr.x (mgr.14150) 458 : cluster [DBG] pgmap v366: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:22.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:22 vm05 bash[20070]: cluster 2026-03-09T14:27:20.988335+0000 mgr.x (mgr.14150) 458 : cluster [DBG] pgmap v366: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:22.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:22 vm04 bash[19581]: cluster 2026-03-09T14:27:20.988335+0000 mgr.x (mgr.14150) 458 : cluster [DBG] pgmap v366: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:22.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 
14:27:22 vm04 bash[19581]: cluster 2026-03-09T14:27:20.988335+0000 mgr.x (mgr.14150) 458 : cluster [DBG] pgmap v366: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:22.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:22 vm03 bash[17524]: cluster 2026-03-09T14:27:20.988335+0000 mgr.x (mgr.14150) 458 : cluster [DBG] pgmap v366: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:22.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:22 vm03 bash[17524]: cluster 2026-03-09T14:27:20.988335+0000 mgr.x (mgr.14150) 458 : cluster [DBG] pgmap v366: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:24.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:24 vm03 bash[17524]: cluster 2026-03-09T14:27:22.988599+0000 mgr.x (mgr.14150) 459 : cluster [DBG] pgmap v367: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:24.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:24 vm03 bash[17524]: cluster 2026-03-09T14:27:22.988599+0000 mgr.x (mgr.14150) 459 : cluster [DBG] pgmap v367: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:24.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:24 vm05 bash[20070]: cluster 2026-03-09T14:27:22.988599+0000 mgr.x (mgr.14150) 459 : cluster [DBG] pgmap v367: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:24.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:24 vm05 bash[20070]: cluster 2026-03-09T14:27:22.988599+0000 mgr.x (mgr.14150) 459 : cluster [DBG] pgmap v367: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:24.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:24 vm04 bash[19581]: cluster 2026-03-09T14:27:22.988599+0000 mgr.x (mgr.14150) 459 : cluster [DBG] pgmap v367: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:24.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:24 vm04 bash[19581]: cluster 2026-03-09T14:27:22.988599+0000 mgr.x (mgr.14150) 459 : cluster [DBG] pgmap v367: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:26.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:26 vm03 bash[17524]: cluster 2026-03-09T14:27:24.988867+0000 mgr.x (mgr.14150) 460 : cluster [DBG] pgmap v368: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:26.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:26 vm03 bash[17524]: cluster 2026-03-09T14:27:24.988867+0000 mgr.x (mgr.14150) 460 : cluster [DBG] pgmap v368: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:26.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:26 vm05 bash[20070]: cluster 2026-03-09T14:27:24.988867+0000 mgr.x (mgr.14150) 460 : cluster [DBG] pgmap v368: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:26.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:26 vm05 bash[20070]: cluster 2026-03-09T14:27:24.988867+0000 mgr.x (mgr.14150) 460 : cluster [DBG] pgmap v368: 4 pgs: 4 active+clean; 449 
KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:26.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:26 vm04 bash[19581]: cluster 2026-03-09T14:27:24.988867+0000 mgr.x (mgr.14150) 460 : cluster [DBG] pgmap v368: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:26.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:26 vm04 bash[19581]: cluster 2026-03-09T14:27:24.988867+0000 mgr.x (mgr.14150) 460 : cluster [DBG] pgmap v368: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:28.006 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:27 vm05 bash[20070]: cluster 2026-03-09T14:27:26.989190+0000 mgr.x (mgr.14150) 461 : cluster [DBG] pgmap v369: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:28.006 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:27 vm05 bash[20070]: cluster 2026-03-09T14:27:26.989190+0000 mgr.x (mgr.14150) 461 : cluster [DBG] pgmap v369: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:28.006 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:27 vm04 bash[19581]: cluster 2026-03-09T14:27:26.989190+0000 mgr.x (mgr.14150) 461 : cluster [DBG] pgmap v369: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:28.006 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:27 vm04 bash[19581]: cluster 2026-03-09T14:27:26.989190+0000 mgr.x (mgr.14150) 461 : cluster [DBG] pgmap v369: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:28.051 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:27:27 vm03 bash[37744]: debug there is no tcmu-runner data available 2026-03-09T14:27:28.051 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:27 vm03 bash[17524]: cluster 2026-03-09T14:27:26.989190+0000 mgr.x (mgr.14150) 461 : cluster [DBG] pgmap v369: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:28.051 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:27 vm03 bash[17524]: cluster 2026-03-09T14:27:26.989190+0000 mgr.x (mgr.14150) 461 : cluster [DBG] pgmap v369: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:29.006 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:28 vm05 bash[20070]: audit 2026-03-09T14:27:27.868138+0000 mgr.x (mgr.14150) 462 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:29.006 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:28 vm05 bash[20070]: audit 2026-03-09T14:27:27.868138+0000 mgr.x (mgr.14150) 462 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:29.006 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:27:28 vm05 bash[38699]: debug there is no tcmu-runner data available 2026-03-09T14:27:29.006 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:28 vm04 bash[19581]: audit 2026-03-09T14:27:27.868138+0000 mgr.x (mgr.14150) 462 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:29.006 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:28 vm04 bash[19581]: audit 2026-03-09T14:27:27.868138+0000 mgr.x (mgr.14150) 462 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:29.051 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:28 vm03 bash[17524]: audit 2026-03-09T14:27:27.868138+0000 mgr.x (mgr.14150) 462 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:29.051 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:28 vm03 bash[17524]: audit 2026-03-09T14:27:27.868138+0000 mgr.x (mgr.14150) 462 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:30.006 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:29 vm05 bash[20070]: audit 2026-03-09T14:27:28.845991+0000 mgr.x (mgr.14150) 463 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:30.006 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:29 vm05 bash[20070]: audit 2026-03-09T14:27:28.845991+0000 mgr.x (mgr.14150) 463 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:30.006 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:29 vm05 bash[20070]: cluster 2026-03-09T14:27:28.989517+0000 mgr.x (mgr.14150) 464 : cluster [DBG] pgmap v370: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:30.006 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:29 vm05 bash[20070]: cluster 2026-03-09T14:27:28.989517+0000 mgr.x (mgr.14150) 464 : cluster [DBG] pgmap v370: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:30.006 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:29 vm04 bash[19581]: audit 2026-03-09T14:27:28.845991+0000 mgr.x (mgr.14150) 463 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:30.006 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:29 vm04 bash[19581]: audit 2026-03-09T14:27:28.845991+0000 mgr.x (mgr.14150) 463 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:30.006 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:29 vm04 bash[19581]: cluster 2026-03-09T14:27:28.989517+0000 mgr.x (mgr.14150) 464 : cluster [DBG] pgmap v370: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:30.006 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:29 vm04 bash[19581]: cluster 2026-03-09T14:27:28.989517+0000 mgr.x (mgr.14150) 464 : cluster [DBG] pgmap v370: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:30.051 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:29 vm03 bash[17524]: audit 2026-03-09T14:27:28.845991+0000 mgr.x (mgr.14150) 463 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:30.051 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:29 vm03 bash[17524]: audit 2026-03-09T14:27:28.845991+0000 mgr.x (mgr.14150) 
463 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:30.051 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:29 vm03 bash[17524]: cluster 2026-03-09T14:27:28.989517+0000 mgr.x (mgr.14150) 464 : cluster [DBG] pgmap v370: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:30.051 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:29 vm03 bash[17524]: cluster 2026-03-09T14:27:28.989517+0000 mgr.x (mgr.14150) 464 : cluster [DBG] pgmap v370: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:32.301 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:32 vm03 bash[17524]: cluster 2026-03-09T14:27:30.989795+0000 mgr.x (mgr.14150) 465 : cluster [DBG] pgmap v371: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:32.301 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:32 vm03 bash[17524]: cluster 2026-03-09T14:27:30.989795+0000 mgr.x (mgr.14150) 465 : cluster [DBG] pgmap v371: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:32.505 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:32 vm05 bash[20070]: cluster 2026-03-09T14:27:30.989795+0000 mgr.x (mgr.14150) 465 : cluster [DBG] pgmap v371: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:32.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:32 vm05 bash[20070]: cluster 2026-03-09T14:27:30.989795+0000 mgr.x (mgr.14150) 465 : cluster [DBG] pgmap v371: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:32.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:32 vm04 bash[19581]: cluster 2026-03-09T14:27:30.989795+0000 mgr.x (mgr.14150) 465 : cluster [DBG] pgmap v371: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:32.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:32 vm04 bash[19581]: cluster 2026-03-09T14:27:30.989795+0000 mgr.x (mgr.14150) 465 : cluster [DBG] pgmap v371: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:34.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:34 vm05 bash[20070]: cluster 2026-03-09T14:27:32.990080+0000 mgr.x (mgr.14150) 466 : cluster [DBG] pgmap v372: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:34.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:34 vm05 bash[20070]: cluster 2026-03-09T14:27:32.990080+0000 mgr.x (mgr.14150) 466 : cluster [DBG] pgmap v372: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:34.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:34 vm04 bash[19581]: cluster 2026-03-09T14:27:32.990080+0000 mgr.x (mgr.14150) 466 : cluster [DBG] pgmap v372: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:34.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:34 vm04 bash[19581]: cluster 2026-03-09T14:27:32.990080+0000 mgr.x (mgr.14150) 466 : cluster [DBG] pgmap v372: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 
2026-03-09T14:27:34.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:34 vm03 bash[17524]: cluster 2026-03-09T14:27:32.990080+0000 mgr.x (mgr.14150) 466 : cluster [DBG] pgmap v372: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:34.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:34 vm03 bash[17524]: cluster 2026-03-09T14:27:32.990080+0000 mgr.x (mgr.14150) 466 : cluster [DBG] pgmap v372: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:36.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:36 vm05 bash[20070]: cluster 2026-03-09T14:27:34.990353+0000 mgr.x (mgr.14150) 467 : cluster [DBG] pgmap v373: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:36.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:36 vm05 bash[20070]: cluster 2026-03-09T14:27:34.990353+0000 mgr.x (mgr.14150) 467 : cluster [DBG] pgmap v373: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:36.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:36 vm04 bash[19581]: cluster 2026-03-09T14:27:34.990353+0000 mgr.x (mgr.14150) 467 : cluster [DBG] pgmap v373: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:36.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:36 vm04 bash[19581]: cluster 2026-03-09T14:27:34.990353+0000 mgr.x (mgr.14150) 467 : cluster [DBG] pgmap v373: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:36.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:36 vm03 bash[17524]: cluster 2026-03-09T14:27:34.990353+0000 mgr.x (mgr.14150) 467 : cluster [DBG] pgmap v373: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:36.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:36 vm03 bash[17524]: cluster 2026-03-09T14:27:34.990353+0000 mgr.x (mgr.14150) 467 : cluster [DBG] pgmap v373: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:38.301 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:27:37 vm03 bash[37744]: debug there is no tcmu-runner data available 2026-03-09T14:27:38.301 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:38 vm03 bash[17524]: cluster 2026-03-09T14:27:36.990594+0000 mgr.x (mgr.14150) 468 : cluster [DBG] pgmap v374: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:38.301 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:38 vm03 bash[17524]: cluster 2026-03-09T14:27:36.990594+0000 mgr.x (mgr.14150) 468 : cluster [DBG] pgmap v374: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:38.505 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:38 vm05 bash[20070]: cluster 2026-03-09T14:27:36.990594+0000 mgr.x (mgr.14150) 468 : cluster [DBG] pgmap v374: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:38.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:38 vm05 bash[20070]: cluster 2026-03-09T14:27:36.990594+0000 mgr.x (mgr.14150) 468 : cluster [DBG] pgmap v374: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 
op/s 2026-03-09T14:27:38.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:38 vm04 bash[19581]: cluster 2026-03-09T14:27:36.990594+0000 mgr.x (mgr.14150) 468 : cluster [DBG] pgmap v374: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:38.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:38 vm04 bash[19581]: cluster 2026-03-09T14:27:36.990594+0000 mgr.x (mgr.14150) 468 : cluster [DBG] pgmap v374: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:39.255 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:39 vm05 bash[20070]: audit 2026-03-09T14:27:37.878619+0000 mgr.x (mgr.14150) 469 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:39.255 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:39 vm05 bash[20070]: audit 2026-03-09T14:27:37.878619+0000 mgr.x (mgr.14150) 469 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:39.256 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:27:38 vm05 bash[38699]: debug there is no tcmu-runner data available 2026-03-09T14:27:39.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:39 vm04 bash[19581]: audit 2026-03-09T14:27:37.878619+0000 mgr.x (mgr.14150) 469 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:39.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:39 vm04 bash[19581]: audit 2026-03-09T14:27:37.878619+0000 mgr.x (mgr.14150) 469 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:39.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:39 vm03 bash[17524]: audit 2026-03-09T14:27:37.878619+0000 mgr.x (mgr.14150) 469 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:39.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:39 vm03 bash[17524]: audit 2026-03-09T14:27:37.878619+0000 mgr.x (mgr.14150) 469 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:40.505 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:40 vm05 bash[20070]: audit 2026-03-09T14:27:38.850255+0000 mgr.x (mgr.14150) 470 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:40.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:40 vm05 bash[20070]: audit 2026-03-09T14:27:38.850255+0000 mgr.x (mgr.14150) 470 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:40.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:40 vm05 bash[20070]: cluster 2026-03-09T14:27:38.990866+0000 mgr.x (mgr.14150) 471 : cluster [DBG] pgmap v375: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:40.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:40 vm05 bash[20070]: cluster 2026-03-09T14:27:38.990866+0000 mgr.x (mgr.14150) 471 : cluster [DBG] pgmap v375: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB 
avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:40.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:40 vm04 bash[19581]: audit 2026-03-09T14:27:38.850255+0000 mgr.x (mgr.14150) 470 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:40.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:40 vm04 bash[19581]: audit 2026-03-09T14:27:38.850255+0000 mgr.x (mgr.14150) 470 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:40.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:40 vm04 bash[19581]: cluster 2026-03-09T14:27:38.990866+0000 mgr.x (mgr.14150) 471 : cluster [DBG] pgmap v375: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:40.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:40 vm04 bash[19581]: cluster 2026-03-09T14:27:38.990866+0000 mgr.x (mgr.14150) 471 : cluster [DBG] pgmap v375: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:40.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:40 vm03 bash[17524]: audit 2026-03-09T14:27:38.850255+0000 mgr.x (mgr.14150) 470 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:40.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:40 vm03 bash[17524]: audit 2026-03-09T14:27:38.850255+0000 mgr.x (mgr.14150) 470 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:40.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:40 vm03 bash[17524]: cluster 2026-03-09T14:27:38.990866+0000 mgr.x (mgr.14150) 471 : cluster [DBG] pgmap v375: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:40.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:40 vm03 bash[17524]: cluster 2026-03-09T14:27:38.990866+0000 mgr.x (mgr.14150) 471 : cluster [DBG] pgmap v375: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:42.505 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:42 vm05 bash[20070]: cluster 2026-03-09T14:27:40.991182+0000 mgr.x (mgr.14150) 472 : cluster [DBG] pgmap v376: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:42.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:42 vm05 bash[20070]: cluster 2026-03-09T14:27:40.991182+0000 mgr.x (mgr.14150) 472 : cluster [DBG] pgmap v376: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:42.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:42 vm04 bash[19581]: cluster 2026-03-09T14:27:40.991182+0000 mgr.x (mgr.14150) 472 : cluster [DBG] pgmap v376: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:42.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:42 vm04 bash[19581]: cluster 2026-03-09T14:27:40.991182+0000 mgr.x (mgr.14150) 472 : cluster [DBG] pgmap v376: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:42.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:42 vm03 bash[17524]: cluster 
2026-03-09T14:27:40.991182+0000 mgr.x (mgr.14150) 472 : cluster [DBG] pgmap v376: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:42.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:42 vm03 bash[17524]: cluster 2026-03-09T14:27:40.991182+0000 mgr.x (mgr.14150) 472 : cluster [DBG] pgmap v376: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:44.505 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:44 vm05 bash[20070]: cluster 2026-03-09T14:27:42.991464+0000 mgr.x (mgr.14150) 473 : cluster [DBG] pgmap v377: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:44.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:44 vm05 bash[20070]: cluster 2026-03-09T14:27:42.991464+0000 mgr.x (mgr.14150) 473 : cluster [DBG] pgmap v377: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:44.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:44 vm04 bash[19581]: cluster 2026-03-09T14:27:42.991464+0000 mgr.x (mgr.14150) 473 : cluster [DBG] pgmap v377: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:44.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:44 vm04 bash[19581]: cluster 2026-03-09T14:27:42.991464+0000 mgr.x (mgr.14150) 473 : cluster [DBG] pgmap v377: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:44.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:44 vm03 bash[17524]: cluster 2026-03-09T14:27:42.991464+0000 mgr.x (mgr.14150) 473 : cluster [DBG] pgmap v377: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:44.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:44 vm03 bash[17524]: cluster 2026-03-09T14:27:42.991464+0000 mgr.x (mgr.14150) 473 : cluster [DBG] pgmap v377: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:46.505 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:46 vm05 bash[20070]: cluster 2026-03-09T14:27:44.991765+0000 mgr.x (mgr.14150) 474 : cluster [DBG] pgmap v378: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:46.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:46 vm05 bash[20070]: cluster 2026-03-09T14:27:44.991765+0000 mgr.x (mgr.14150) 474 : cluster [DBG] pgmap v378: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:46.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:46 vm04 bash[19581]: cluster 2026-03-09T14:27:44.991765+0000 mgr.x (mgr.14150) 474 : cluster [DBG] pgmap v378: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:46.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:46 vm04 bash[19581]: cluster 2026-03-09T14:27:44.991765+0000 mgr.x (mgr.14150) 474 : cluster [DBG] pgmap v378: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:46.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:46 vm03 bash[17524]: cluster 2026-03-09T14:27:44.991765+0000 mgr.x (mgr.14150) 474 : cluster [DBG] pgmap v378: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 
GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:46.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:46 vm03 bash[17524]: cluster 2026-03-09T14:27:44.991765+0000 mgr.x (mgr.14150) 474 : cluster [DBG] pgmap v378: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:48.301 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:27:47 vm03 bash[37744]: debug there is no tcmu-runner data available 2026-03-09T14:27:48.301 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:48 vm03 bash[17524]: cluster 2026-03-09T14:27:46.992064+0000 mgr.x (mgr.14150) 475 : cluster [DBG] pgmap v379: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:48.301 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:48 vm03 bash[17524]: cluster 2026-03-09T14:27:46.992064+0000 mgr.x (mgr.14150) 475 : cluster [DBG] pgmap v379: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:48.505 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:48 vm04 bash[19581]: cluster 2026-03-09T14:27:46.992064+0000 mgr.x (mgr.14150) 475 : cluster [DBG] pgmap v379: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:48.505 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:48 vm04 bash[19581]: cluster 2026-03-09T14:27:46.992064+0000 mgr.x (mgr.14150) 475 : cluster [DBG] pgmap v379: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:48.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:48 vm05 bash[20070]: cluster 2026-03-09T14:27:46.992064+0000 mgr.x (mgr.14150) 475 : cluster [DBG] pgmap v379: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:48.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:48 vm05 bash[20070]: cluster 2026-03-09T14:27:46.992064+0000 mgr.x (mgr.14150) 475 : cluster [DBG] pgmap v379: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:49.255 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:27:48 vm05 bash[38699]: debug there is no tcmu-runner data available 2026-03-09T14:27:49.255 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:49 vm05 bash[20070]: audit 2026-03-09T14:27:47.889271+0000 mgr.x (mgr.14150) 476 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:49.256 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:49 vm05 bash[20070]: audit 2026-03-09T14:27:47.889271+0000 mgr.x (mgr.14150) 476 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:49.505 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:49 vm04 bash[19581]: audit 2026-03-09T14:27:47.889271+0000 mgr.x (mgr.14150) 476 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:49.505 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:49 vm04 bash[19581]: audit 2026-03-09T14:27:47.889271+0000 mgr.x (mgr.14150) 476 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:49.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:49 vm03 
bash[17524]: audit 2026-03-09T14:27:47.889271+0000 mgr.x (mgr.14150) 476 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:49.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:49 vm03 bash[17524]: audit 2026-03-09T14:27:47.889271+0000 mgr.x (mgr.14150) 476 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:50.505 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:50 vm04 bash[19581]: audit 2026-03-09T14:27:48.853207+0000 mgr.x (mgr.14150) 477 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:50.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:50 vm04 bash[19581]: audit 2026-03-09T14:27:48.853207+0000 mgr.x (mgr.14150) 477 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:50.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:50 vm04 bash[19581]: cluster 2026-03-09T14:27:48.992394+0000 mgr.x (mgr.14150) 478 : cluster [DBG] pgmap v380: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:50.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:50 vm04 bash[19581]: cluster 2026-03-09T14:27:48.992394+0000 mgr.x (mgr.14150) 478 : cluster [DBG] pgmap v380: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:50.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:50 vm05 bash[20070]: audit 2026-03-09T14:27:48.853207+0000 mgr.x (mgr.14150) 477 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:50.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:50 vm05 bash[20070]: audit 2026-03-09T14:27:48.853207+0000 mgr.x (mgr.14150) 477 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:50.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:50 vm05 bash[20070]: cluster 2026-03-09T14:27:48.992394+0000 mgr.x (mgr.14150) 478 : cluster [DBG] pgmap v380: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:50.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:50 vm05 bash[20070]: cluster 2026-03-09T14:27:48.992394+0000 mgr.x (mgr.14150) 478 : cluster [DBG] pgmap v380: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:50.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:50 vm03 bash[17524]: audit 2026-03-09T14:27:48.853207+0000 mgr.x (mgr.14150) 477 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:50.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:50 vm03 bash[17524]: audit 2026-03-09T14:27:48.853207+0000 mgr.x (mgr.14150) 477 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:50.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:50 vm03 bash[17524]: cluster 2026-03-09T14:27:48.992394+0000 mgr.x (mgr.14150) 478 : cluster [DBG] pgmap v380: 4 pgs: 4 active+clean; 449 
KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:50.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:50 vm03 bash[17524]: cluster 2026-03-09T14:27:48.992394+0000 mgr.x (mgr.14150) 478 : cluster [DBG] pgmap v380: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:52.505 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:52 vm05 bash[20070]: cluster 2026-03-09T14:27:50.992651+0000 mgr.x (mgr.14150) 479 : cluster [DBG] pgmap v381: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:52.505 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:52 vm05 bash[20070]: cluster 2026-03-09T14:27:50.992651+0000 mgr.x (mgr.14150) 479 : cluster [DBG] pgmap v381: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:52.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:52 vm04 bash[19581]: cluster 2026-03-09T14:27:50.992651+0000 mgr.x (mgr.14150) 479 : cluster [DBG] pgmap v381: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:52.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:52 vm04 bash[19581]: cluster 2026-03-09T14:27:50.992651+0000 mgr.x (mgr.14150) 479 : cluster [DBG] pgmap v381: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:52.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:52 vm03 bash[17524]: cluster 2026-03-09T14:27:50.992651+0000 mgr.x (mgr.14150) 479 : cluster [DBG] pgmap v381: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:52.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:52 vm03 bash[17524]: cluster 2026-03-09T14:27:50.992651+0000 mgr.x (mgr.14150) 479 : cluster [DBG] pgmap v381: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:54.505 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:54 vm04 bash[19581]: cluster 2026-03-09T14:27:52.992880+0000 mgr.x (mgr.14150) 480 : cluster [DBG] pgmap v382: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:54.505 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:54 vm04 bash[19581]: cluster 2026-03-09T14:27:52.992880+0000 mgr.x (mgr.14150) 480 : cluster [DBG] pgmap v382: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:54.505 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:54 vm05 bash[20070]: cluster 2026-03-09T14:27:52.992880+0000 mgr.x (mgr.14150) 480 : cluster [DBG] pgmap v382: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:54.505 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:54 vm05 bash[20070]: cluster 2026-03-09T14:27:52.992880+0000 mgr.x (mgr.14150) 480 : cluster [DBG] pgmap v382: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:54.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:54 vm03 bash[17524]: cluster 2026-03-09T14:27:52.992880+0000 mgr.x (mgr.14150) 480 : cluster [DBG] pgmap v382: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:54.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 
14:27:54 vm03 bash[17524]: cluster 2026-03-09T14:27:52.992880+0000 mgr.x (mgr.14150) 480 : cluster [DBG] pgmap v382: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:55.505 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:55 vm04 bash[19581]: audit 2026-03-09T14:27:54.948356+0000 mon.a (mon.0) 750 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:27:55.505 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:55 vm04 bash[19581]: audit 2026-03-09T14:27:54.948356+0000 mon.a (mon.0) 750 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:27:55.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:55 vm05 bash[20070]: audit 2026-03-09T14:27:54.948356+0000 mon.a (mon.0) 750 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:27:55.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:55 vm05 bash[20070]: audit 2026-03-09T14:27:54.948356+0000 mon.a (mon.0) 750 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:27:55.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:55 vm03 bash[17524]: audit 2026-03-09T14:27:54.948356+0000 mon.a (mon.0) 750 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:27:55.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:55 vm03 bash[17524]: audit 2026-03-09T14:27:54.948356+0000 mon.a (mon.0) 750 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:27:56.505 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:56 vm05 bash[20070]: cluster 2026-03-09T14:27:54.993120+0000 mgr.x (mgr.14150) 481 : cluster [DBG] pgmap v383: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:56.505 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:56 vm05 bash[20070]: cluster 2026-03-09T14:27:54.993120+0000 mgr.x (mgr.14150) 481 : cluster [DBG] pgmap v383: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:56.505 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:56 vm05 bash[20070]: audit 2026-03-09T14:27:55.240220+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.505 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:56 vm05 bash[20070]: audit 2026-03-09T14:27:55.240220+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.505 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:56 vm05 bash[20070]: audit 2026-03-09T14:27:55.244609+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:56 vm05 bash[20070]: audit 2026-03-09T14:27:55.244609+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:56 vm05 bash[20070]: audit 
2026-03-09T14:27:55.248897+0000 mon.a (mon.0) 753 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:56 vm05 bash[20070]: audit 2026-03-09T14:27:55.248897+0000 mon.a (mon.0) 753 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:56 vm05 bash[20070]: audit 2026-03-09T14:27:55.253696+0000 mon.a (mon.0) 754 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:56 vm05 bash[20070]: audit 2026-03-09T14:27:55.253696+0000 mon.a (mon.0) 754 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:56 vm05 bash[20070]: audit 2026-03-09T14:27:55.576941+0000 mon.a (mon.0) 755 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:56 vm05 bash[20070]: audit 2026-03-09T14:27:55.576941+0000 mon.a (mon.0) 755 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:56 vm05 bash[20070]: audit 2026-03-09T14:27:55.581272+0000 mon.a (mon.0) 756 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:56 vm05 bash[20070]: audit 2026-03-09T14:27:55.581272+0000 mon.a (mon.0) 756 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:56 vm05 bash[20070]: audit 2026-03-09T14:27:56.009777+0000 mon.a (mon.0) 757 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:56 vm05 bash[20070]: audit 2026-03-09T14:27:56.009777+0000 mon.a (mon.0) 757 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:56 vm05 bash[20070]: audit 2026-03-09T14:27:56.010422+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:56 vm05 bash[20070]: audit 2026-03-09T14:27:56.010422+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:56 vm05 bash[20070]: audit 2026-03-09T14:27:56.015010+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:56 vm05 bash[20070]: audit 2026-03-09T14:27:56.015010+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:56 vm04 bash[19581]: cluster 2026-03-09T14:27:54.993120+0000 mgr.x (mgr.14150) 481 : cluster [DBG] pgmap v383: 4 pgs: 4 active+clean; 449 KiB 
data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:56 vm04 bash[19581]: cluster 2026-03-09T14:27:54.993120+0000 mgr.x (mgr.14150) 481 : cluster [DBG] pgmap v383: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:56 vm04 bash[19581]: audit 2026-03-09T14:27:55.240220+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:56 vm04 bash[19581]: audit 2026-03-09T14:27:55.240220+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:56 vm04 bash[19581]: audit 2026-03-09T14:27:55.244609+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:56 vm04 bash[19581]: audit 2026-03-09T14:27:55.244609+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:56 vm04 bash[19581]: audit 2026-03-09T14:27:55.248897+0000 mon.a (mon.0) 753 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:56 vm04 bash[19581]: audit 2026-03-09T14:27:55.248897+0000 mon.a (mon.0) 753 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:56 vm04 bash[19581]: audit 2026-03-09T14:27:55.253696+0000 mon.a (mon.0) 754 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:56 vm04 bash[19581]: audit 2026-03-09T14:27:55.253696+0000 mon.a (mon.0) 754 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:56 vm04 bash[19581]: audit 2026-03-09T14:27:55.576941+0000 mon.a (mon.0) 755 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:56 vm04 bash[19581]: audit 2026-03-09T14:27:55.576941+0000 mon.a (mon.0) 755 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:56 vm04 bash[19581]: audit 2026-03-09T14:27:55.581272+0000 mon.a (mon.0) 756 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:56 vm04 bash[19581]: audit 2026-03-09T14:27:55.581272+0000 mon.a (mon.0) 756 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:56 vm04 bash[19581]: audit 2026-03-09T14:27:56.009777+0000 mon.a (mon.0) 757 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:56 vm04 bash[19581]: audit 2026-03-09T14:27:56.009777+0000 mon.a (mon.0) 757 : audit [DBG] 
from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:56 vm04 bash[19581]: audit 2026-03-09T14:27:56.010422+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:56 vm04 bash[19581]: audit 2026-03-09T14:27:56.010422+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:56 vm04 bash[19581]: audit 2026-03-09T14:27:56.015010+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.506 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:56 vm04 bash[19581]: audit 2026-03-09T14:27:56.015010+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:56 vm03 bash[17524]: cluster 2026-03-09T14:27:54.993120+0000 mgr.x (mgr.14150) 481 : cluster [DBG] pgmap v383: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:56.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:56 vm03 bash[17524]: cluster 2026-03-09T14:27:54.993120+0000 mgr.x (mgr.14150) 481 : cluster [DBG] pgmap v383: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:27:56.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:56 vm03 bash[17524]: audit 2026-03-09T14:27:55.240220+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:56 vm03 bash[17524]: audit 2026-03-09T14:27:55.240220+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:56 vm03 bash[17524]: audit 2026-03-09T14:27:55.244609+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:56 vm03 bash[17524]: audit 2026-03-09T14:27:55.244609+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:56 vm03 bash[17524]: audit 2026-03-09T14:27:55.248897+0000 mon.a (mon.0) 753 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:56 vm03 bash[17524]: audit 2026-03-09T14:27:55.248897+0000 mon.a (mon.0) 753 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:56 vm03 bash[17524]: audit 2026-03-09T14:27:55.253696+0000 mon.a (mon.0) 754 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:56 vm03 bash[17524]: audit 2026-03-09T14:27:55.253696+0000 mon.a (mon.0) 754 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 
2026-03-09T14:27:56.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:56 vm03 bash[17524]: audit 2026-03-09T14:27:55.576941+0000 mon.a (mon.0) 755 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:56 vm03 bash[17524]: audit 2026-03-09T14:27:55.576941+0000 mon.a (mon.0) 755 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:56 vm03 bash[17524]: audit 2026-03-09T14:27:55.581272+0000 mon.a (mon.0) 756 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:56 vm03 bash[17524]: audit 2026-03-09T14:27:55.581272+0000 mon.a (mon.0) 756 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:56 vm03 bash[17524]: audit 2026-03-09T14:27:56.009777+0000 mon.a (mon.0) 757 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:27:56.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:56 vm03 bash[17524]: audit 2026-03-09T14:27:56.009777+0000 mon.a (mon.0) 757 : audit [DBG] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:27:56.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:56 vm03 bash[17524]: audit 2026-03-09T14:27:56.010422+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:27:56.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:56 vm03 bash[17524]: audit 2026-03-09T14:27:56.010422+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:27:56.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:56 vm03 bash[17524]: audit 2026-03-09T14:27:56.015010+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:56.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:56 vm03 bash[17524]: audit 2026-03-09T14:27:56.015010+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.14150 192.168.123.103:0/2330119035' entity='mgr.x' 2026-03-09T14:27:58.245 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:27:57 vm03 bash[37744]: debug there is no tcmu-runner data available 2026-03-09T14:27:58.505 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:58 vm05 bash[20070]: cluster 2026-03-09T14:27:56.993363+0000 mgr.x (mgr.14150) 482 : cluster [DBG] pgmap v384: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:58.505 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:58 vm05 bash[20070]: cluster 2026-03-09T14:27:56.993363+0000 mgr.x (mgr.14150) 482 : cluster [DBG] pgmap v384: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:58.505 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:58 vm04 bash[19581]: cluster 2026-03-09T14:27:56.993363+0000 mgr.x (mgr.14150) 482 : cluster [DBG] pgmap v384: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:58.505 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:58 vm04 bash[19581]: cluster 2026-03-09T14:27:56.993363+0000 mgr.x (mgr.14150) 482 : cluster [DBG] pgmap v384: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:58.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:58 vm03 bash[17524]: cluster 2026-03-09T14:27:56.993363+0000 mgr.x (mgr.14150) 482 : cluster [DBG] pgmap v384: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:58.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:58 vm03 bash[17524]: cluster 2026-03-09T14:27:56.993363+0000 mgr.x (mgr.14150) 482 : cluster [DBG] pgmap v384: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:27:59.253 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:27:58 vm05 bash[38699]: debug there is no tcmu-runner data available 2026-03-09T14:27:59.505 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:59 vm05 bash[20070]: audit 2026-03-09T14:27:57.890799+0000 mgr.x (mgr.14150) 483 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:59.505 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:27:59 vm05 bash[20070]: audit 2026-03-09T14:27:57.890799+0000 mgr.x (mgr.14150) 483 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:59.505 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:59 vm04 bash[19581]: audit 2026-03-09T14:27:57.890799+0000 mgr.x (mgr.14150) 483 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:59.505 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:27:59 vm04 bash[19581]: audit 2026-03-09T14:27:57.890799+0000 mgr.x (mgr.14150) 483 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:59.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:59 vm03 bash[17524]: audit 2026-03-09T14:27:57.890799+0000 mgr.x (mgr.14150) 483 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:27:59.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:27:59 vm03 bash[17524]: audit 2026-03-09T14:27:57.890799+0000 mgr.x (mgr.14150) 483 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:28:00.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:00 vm03 bash[17524]: audit 2026-03-09T14:27:58.862156+0000 mgr.x (mgr.14150) 484 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:28:00.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:00 vm03 bash[17524]: audit 2026-03-09T14:27:58.862156+0000 mgr.x (mgr.14150) 484 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:28:00.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:00 vm03 bash[17524]: cluster 2026-03-09T14:27:58.993570+0000 mgr.x (mgr.14150) 485 : cluster [DBG] pgmap v385: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 
2026-03-09T14:28:00.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:00 vm04 bash[19581]: audit 2026-03-09T14:27:58.862156+0000 mgr.x (mgr.14150) 484 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:28:00.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:00 vm04 bash[19581]: cluster 2026-03-09T14:27:58.993570+0000 mgr.x (mgr.14150) 485 : cluster [DBG] pgmap v385: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:28:00.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:00 vm05 bash[20070]: audit 2026-03-09T14:27:58.862156+0000 mgr.x (mgr.14150) 484 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:28:00.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:00 vm05 bash[20070]: cluster 2026-03-09T14:27:58.993570+0000 mgr.x (mgr.14150) 485 : cluster [DBG] pgmap v385: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:28:02.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:02 vm03 bash[17524]: cluster 2026-03-09T14:28:00.993837+0000 mgr.x (mgr.14150) 486 : cluster [DBG] pgmap v386: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:28:02.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:02 vm04 bash[19581]: cluster 2026-03-09T14:28:00.993837+0000 mgr.x (mgr.14150) 486 : cluster [DBG] pgmap v386: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:28:02.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:02 vm05 bash[20070]: cluster 2026-03-09T14:28:00.993837+0000 mgr.x (mgr.14150) 486 : cluster [DBG] pgmap v386: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:28:04.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:04 vm03 bash[17524]: cluster 2026-03-09T14:28:02.994043+0000 mgr.x (mgr.14150) 487 : cluster [DBG] pgmap v387: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:28:04.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:04 vm04 bash[19581]: cluster 2026-03-09T14:28:02.994043+0000 mgr.x (mgr.14150) 487 : cluster [DBG] pgmap v387: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:28:04.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:04 vm05 bash[20070]: cluster 2026-03-09T14:28:02.994043+0000 mgr.x (mgr.14150) 487 : cluster [DBG] pgmap v387: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:28:06.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:06 vm03 bash[17524]: cluster 2026-03-09T14:28:04.994296+0000 mgr.x (mgr.14150) 488 : cluster [DBG] pgmap v388: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:28:06.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:06 vm04 bash[19581]: cluster 2026-03-09T14:28:04.994296+0000 mgr.x (mgr.14150) 488 : cluster [DBG] pgmap v388: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:28:06.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:06 vm05 bash[20070]: cluster 2026-03-09T14:28:04.994296+0000 mgr.x (mgr.14150) 488 : cluster [DBG] pgmap v388: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:28:08.279 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:07 vm03 bash[37744]: debug there is no tcmu-runner data available
2026-03-09T14:28:08.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:08 vm03 bash[17524]: cluster 2026-03-09T14:28:06.994536+0000 mgr.x (mgr.14150) 489 : cluster [DBG] pgmap v389: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:28:08.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:08 vm04 bash[19581]: cluster 2026-03-09T14:28:06.994536+0000 mgr.x (mgr.14150) 489 : cluster [DBG] pgmap v389: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:28:08.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:08 vm05 bash[20070]: cluster 2026-03-09T14:28:06.994536+0000 mgr.x (mgr.14150) 489 : cluster [DBG] pgmap v389: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:28:09.255 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:08 vm05 bash[38699]: debug there is no tcmu-runner data available
2026-03-09T14:28:09.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:09 vm03 bash[17524]: audit 2026-03-09T14:28:07.899222+0000 mgr.x (mgr.14150) 490 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:28:09.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:09 vm04 bash[19581]: audit 2026-03-09T14:28:07.899222+0000 mgr.x (mgr.14150) 490 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:28:09.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:09 vm05 bash[20070]: audit 2026-03-09T14:28:07.899222+0000 mgr.x (mgr.14150) 490 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:28:10.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:10 vm03 bash[17524]: audit 2026-03-09T14:28:08.869863+0000 mgr.x (mgr.14150) 491 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:28:10.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:10 vm03 bash[17524]: cluster 2026-03-09T14:28:08.994791+0000 mgr.x (mgr.14150) 492 : cluster [DBG] pgmap v390: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:28:10.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:10 vm04 bash[19581]: audit 2026-03-09T14:28:08.869863+0000 mgr.x (mgr.14150) 491 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:28:10.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:10 vm04 bash[19581]: cluster 2026-03-09T14:28:08.994791+0000 mgr.x (mgr.14150) 492 : cluster [DBG] pgmap v390: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:28:10.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:10 vm05 bash[20070]: audit 2026-03-09T14:28:08.869863+0000 mgr.x (mgr.14150) 491 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:28:10.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:10 vm05 bash[20070]: cluster 2026-03-09T14:28:08.994791+0000 mgr.x (mgr.14150) 492 : cluster [DBG] pgmap v390: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:28:10.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:10 vm05 bash[20070]: cluster 2026-03-09T14:28:08.994791+0000 mgr.x (mgr.14150) 492 : cluster [DBG] pgmap v390: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:28:12.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:12 vm03 bash[17524]: cluster 2026-03-09T14:28:10.995038+0000 mgr.x (mgr.14150) 493 : cluster [DBG] pgmap v391: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:28:12.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:12 vm03 bash[17524]: cluster 2026-03-09T14:28:10.995038+0000 mgr.x (mgr.14150) 493 : cluster [DBG] pgmap v391: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:28:12.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:12 vm04 bash[19581]: cluster 2026-03-09T14:28:10.995038+0000 mgr.x (mgr.14150) 493 : cluster [DBG] pgmap v391: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:28:12.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:12 vm04 bash[19581]: cluster 2026-03-09T14:28:10.995038+0000 mgr.x (mgr.14150) 493 : cluster [DBG] pgmap v391: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:28:12.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:12 vm05 bash[20070]: cluster 2026-03-09T14:28:10.995038+0000 mgr.x (mgr.14150) 493 : cluster [DBG] pgmap v391: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:28:12.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:12 vm05 bash[20070]: cluster 2026-03-09T14:28:10.995038+0000 mgr.x (mgr.14150) 493 : cluster [DBG] pgmap v391: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:28:14.459 INFO:teuthology.orchestra.run.vm05.stderr:Note: switching to '569c3e99c9b32a51b4eaf08731c728f4513ed589'. 2026-03-09T14:28:14.459 INFO:teuthology.orchestra.run.vm05.stderr: 2026-03-09T14:28:14.459 INFO:teuthology.orchestra.run.vm05.stderr:You are in 'detached HEAD' state. You can look around, make experimental 2026-03-09T14:28:14.459 INFO:teuthology.orchestra.run.vm05.stderr:changes and commit them, and you can discard any commits you make in this 2026-03-09T14:28:14.459 INFO:teuthology.orchestra.run.vm05.stderr:state without impacting any branches by switching back to a branch. 2026-03-09T14:28:14.459 INFO:teuthology.orchestra.run.vm05.stderr: 2026-03-09T14:28:14.459 INFO:teuthology.orchestra.run.vm05.stderr:If you want to create a new branch to retain commits you create, you may 2026-03-09T14:28:14.459 INFO:teuthology.orchestra.run.vm05.stderr:do so (now or later) by using -c with the switch command. 
Example: 2026-03-09T14:28:14.459 INFO:teuthology.orchestra.run.vm05.stderr: 2026-03-09T14:28:14.459 INFO:teuthology.orchestra.run.vm05.stderr: git switch -c 2026-03-09T14:28:14.459 INFO:teuthology.orchestra.run.vm05.stderr: 2026-03-09T14:28:14.459 INFO:teuthology.orchestra.run.vm05.stderr:Or undo this operation with: 2026-03-09T14:28:14.459 INFO:teuthology.orchestra.run.vm05.stderr: 2026-03-09T14:28:14.459 INFO:teuthology.orchestra.run.vm05.stderr: git switch - 2026-03-09T14:28:14.459 INFO:teuthology.orchestra.run.vm05.stderr: 2026-03-09T14:28:14.460 INFO:teuthology.orchestra.run.vm05.stderr:Turn off this advice by setting config variable advice.detachedHead to false 2026-03-09T14:28:14.460 INFO:teuthology.orchestra.run.vm05.stderr: 2026-03-09T14:28:14.460 INFO:teuthology.orchestra.run.vm05.stderr:HEAD is now at 569c3e99c9b qa/rgw: bucket notifications use pynose 2026-03-09T14:28:14.465 DEBUG:teuthology.orchestra.run.vm05:> cp -- /home/ubuntu/cephtest/clone.client.2/src/test/cli-integration/rbd/gwcli_delete.t /home/ubuntu/cephtest/archive/cram.client.2 2026-03-09T14:28:14.512 INFO:tasks.cram:Running tests for client.0... 2026-03-09T14:28:14.512 DEBUG:teuthology.orchestra.run.vm03:> CEPH_REF=master CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/virtualenv/bin/cram -v -- /home/ubuntu/cephtest/archive/cram.client.0/*.t 2026-03-09T14:28:14.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:14 vm05 bash[20070]: cluster 2026-03-09T14:28:12.995258+0000 mgr.x (mgr.14150) 494 : cluster [DBG] pgmap v392: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:28:14.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:14 vm05 bash[20070]: cluster 2026-03-09T14:28:12.995258+0000 mgr.x (mgr.14150) 494 : cluster [DBG] pgmap v392: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:28:14.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:14 vm04 bash[19581]: cluster 2026-03-09T14:28:12.995258+0000 mgr.x (mgr.14150) 494 : cluster [DBG] pgmap v392: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:28:14.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:14 vm04 bash[19581]: cluster 2026-03-09T14:28:12.995258+0000 mgr.x (mgr.14150) 494 : cluster [DBG] pgmap v392: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:28:14.801 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:14 vm03 bash[17524]: cluster 2026-03-09T14:28:12.995258+0000 mgr.x (mgr.14150) 494 : cluster [DBG] pgmap v392: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:28:14.801 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:14 vm03 bash[17524]: cluster 2026-03-09T14:28:12.995258+0000 mgr.x (mgr.14150) 494 : cluster [DBG] pgmap v392: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-09T14:28:15.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:14 vm03 bash[37744]: debug ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:14] "GET /api/config HTTP/1.1" 200 - 2026-03-09T14:28:15.261 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:14 vm03 bash[37744]: ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:14] "GET /api/config HTTP/1.1" 200 - 2026-03-09T14:28:15.551 
2026-03-09T14:28:15.551 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:15 vm03 bash[37744]: debug ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:15] "GET /api/config HTTP/1.1" 200 -
2026-03-09T14:28:15.551 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:15 vm03 bash[37744]: ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:15] "GET /api/config HTTP/1.1" 200 -
2026-03-09T14:28:15.551 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:15 vm03 bash[37744]: debug ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:15] "GET /api/_ping HTTP/1.1" 200 -
2026-03-09T14:28:15.551 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:15 vm03 bash[37744]: ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:15] "GET /api/_ping HTTP/1.1" 200 -
2026-03-09T14:28:15.551 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:15 vm03 bash[37744]: debug (LUN.allocate) created datapool/block0 successfully
2026-03-09T14:28:15.551 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:15 vm03 bash[37744]: debug (LUN.add_dev_to_lio) Adding image 'datapool/block0' to LIO backstore user:rbd
2026-03-09T14:28:15.551 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:15 vm03 bash[37744]: debug failed to add datapool/block0 to LIO - error(Could not create _Backstore in configFS)
2026-03-09T14:28:15.551 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:15 vm03 bash[37744]: debug LUN alloc problem - failed to add datapool/block0 to LIO - error(Could not create _Backstore in configFS)
2026-03-09T14:28:15.551 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:15 vm03 bash[37744]: debug ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:15] "PUT /api/_disk/datapool/block0 HTTP/1.1" 500 -
2026-03-09T14:28:15.551 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:15 vm03 bash[37744]: ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:15] "PUT /api/_disk/datapool/block0 HTTP/1.1" 500 -
2026-03-09T14:28:15.551 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:15 vm03 bash[37744]: debug _disk change on localhost failed with 500
2026-03-09T14:28:15.551 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:15 vm03 bash[37744]: debug ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:15] "PUT /api/disk/datapool/block0 HTTP/1.1" 500 -
2026-03-09T14:28:15.551 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:15 vm03 bash[37744]: ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:15] "PUT /api/disk/datapool/block0 HTTP/1.1" 500 -
2026-03-09T14:28:15.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:15 vm03 bash[17524]: audit 2026-03-09T14:28:14.852860+0000 mon.a (mon.0) 760 : audit [DBG] from='client.? 192.168.123.103:0/2276782700' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:15.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:15 vm03 bash[17524]: audit 2026-03-09T14:28:14.868796+0000 mon.a (mon.0) 761 : audit [DBG] from='client.? 192.168.123.103:0/2220264852' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:15.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:15 vm03 bash[17524]: audit 2026-03-09T14:28:14.875096+0000 mon.a (mon.0) 762 : audit [DBG] from='client.? 192.168.123.103:0/509353229' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:15.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:15 vm03 bash[17524]: audit 2026-03-09T14:28:14.912957+0000 mon.a (mon.0) 763 : audit [DBG] from='client.? 192.168.123.103:0/4129771511' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:15.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:15 vm03 bash[17524]: audit 2026-03-09T14:28:14.919521+0000 mon.c (mon.1) 26 : audit [DBG] from='client.? 192.168.123.103:0/3241133922' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:15.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:15 vm03 bash[17524]: audit 2026-03-09T14:28:15.235242+0000 mon.b (mon.2) 25 : audit [DBG] from='client.? 192.168.123.103:0/3590383886' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:15.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:15 vm03 bash[17524]: audit 2026-03-09T14:28:15.251746+0000 mon.a (mon.0) 764 : audit [DBG] from='client.? 192.168.123.103:0/2083069731' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:15.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:15 vm03 bash[17524]: audit 2026-03-09T14:28:15.259097+0000 mon.a (mon.0) 765 : audit [DBG] from='client.? 192.168.123.103:0/2052743092' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:15.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:15 vm03 bash[17524]: audit 2026-03-09T14:28:15.297516+0000 mon.a (mon.0) 766 : audit [DBG] from='client.? 192.168.123.103:0/4277707415' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:15.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:15 vm03 bash[17524]: audit 2026-03-09T14:28:15.303729+0000 mon.c (mon.1) 27 : audit [DBG] from='client.? 192.168.123.103:0/1500158687' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:15.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:15 vm05 bash[20070]: audit 2026-03-09T14:28:14.852860+0000 mon.a (mon.0) 760 : audit [DBG] from='client.? 192.168.123.103:0/2276782700' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:15.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:15 vm05 bash[20070]: audit 2026-03-09T14:28:14.868796+0000 mon.a (mon.0) 761 : audit [DBG] from='client.? 192.168.123.103:0/2220264852' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:15.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:15 vm05 bash[20070]: audit 2026-03-09T14:28:14.875096+0000 mon.a (mon.0) 762 : audit [DBG] from='client.? 192.168.123.103:0/509353229' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:15.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:15 vm05 bash[20070]: audit 2026-03-09T14:28:14.912957+0000 mon.a (mon.0) 763 : audit [DBG] from='client.? 192.168.123.103:0/4129771511' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:15.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:15 vm05 bash[20070]: audit 2026-03-09T14:28:14.919521+0000 mon.c (mon.1) 26 : audit [DBG] from='client.? 192.168.123.103:0/3241133922' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:15.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:15 vm05 bash[20070]: audit 2026-03-09T14:28:15.235242+0000 mon.b (mon.2) 25 : audit [DBG] from='client.? 192.168.123.103:0/3590383886' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:15.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:15 vm05 bash[20070]: audit 2026-03-09T14:28:15.251746+0000 mon.a (mon.0) 764 : audit [DBG] from='client.? 192.168.123.103:0/2083069731' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:15.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:15 vm05 bash[20070]: audit 2026-03-09T14:28:15.259097+0000 mon.a (mon.0) 765 : audit [DBG] from='client.? 192.168.123.103:0/2052743092' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:15.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:15 vm05 bash[20070]: audit 2026-03-09T14:28:15.297516+0000 mon.a (mon.0) 766 : audit [DBG] from='client.? 192.168.123.103:0/4277707415' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:15.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:15 vm05 bash[20070]: audit 2026-03-09T14:28:15.303729+0000 mon.c (mon.1) 27 : audit [DBG] from='client.? 192.168.123.103:0/1500158687' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:15.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:15 vm04 bash[19581]: audit 2026-03-09T14:28:14.852860+0000 mon.a (mon.0) 760 : audit [DBG] from='client.? 192.168.123.103:0/2276782700' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:15.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:15 vm04 bash[19581]: audit 2026-03-09T14:28:14.868796+0000 mon.a (mon.0) 761 : audit [DBG] from='client.? 192.168.123.103:0/2220264852' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:15.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:15 vm04 bash[19581]: audit 2026-03-09T14:28:14.875096+0000 mon.a (mon.0) 762 : audit [DBG] from='client.? 192.168.123.103:0/509353229' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:15.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:15 vm04 bash[19581]: audit 2026-03-09T14:28:14.912957+0000 mon.a (mon.0) 763 : audit [DBG] from='client.? 192.168.123.103:0/4129771511' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:15.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:15 vm04 bash[19581]: audit 2026-03-09T14:28:14.919521+0000 mon.c (mon.1) 26 : audit [DBG] from='client.? 192.168.123.103:0/3241133922' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:15.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:15 vm04 bash[19581]: audit 2026-03-09T14:28:15.235242+0000 mon.b (mon.2) 25 : audit [DBG] from='client.? 192.168.123.103:0/3590383886' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:15.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:15 vm04 bash[19581]: audit 2026-03-09T14:28:15.251746+0000 mon.a (mon.0) 764 : audit [DBG] from='client.? 192.168.123.103:0/2083069731' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:15.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:15 vm04 bash[19581]: audit 2026-03-09T14:28:15.259097+0000 mon.a (mon.0) 765 : audit [DBG] from='client.? 192.168.123.103:0/2052743092' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:15.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:15 vm04 bash[19581]: audit 2026-03-09T14:28:15.297516+0000 mon.a (mon.0) 766 : audit [DBG] from='client.? 192.168.123.103:0/4277707415' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:15.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:15 vm04 bash[19581]: audit 2026-03-09T14:28:15.303729+0000 mon.c (mon.1) 27 : audit [DBG] from='client.? 192.168.123.103:0/1500158687' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:16.051 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:15 vm03 bash[37744]: debug ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:15] "GET /api/config HTTP/1.1" 200 -
2026-03-09T14:28:16.051 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:15 vm03 bash[37744]: ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:15] "GET /api/config HTTP/1.1" 200 -
2026-03-09T14:28:16.488 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:16 vm03 bash[37744]: debug ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:16] "GET /api/config HTTP/1.1" 200 -
2026-03-09T14:28:16.488 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:16 vm03 bash[37744]: ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:16] "GET /api/config HTTP/1.1" 200 -
2026-03-09T14:28:16.488 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:16 vm03 bash[17524]: cluster 2026-03-09T14:28:14.995498+0000 mgr.x (mgr.14150) 495 : cluster [DBG] pgmap v393: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:28:16.488 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:16 vm03 bash[17524]: audit 2026-03-09T14:28:15.699362+0000 mon.a (mon.0) 767 : audit [DBG] from='client.? 192.168.123.103:0/3976120618' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:16.488 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:16 vm03 bash[17524]: audit 2026-03-09T14:28:15.715631+0000 mon.a (mon.0) 768 : audit [DBG] from='client.? 192.168.123.103:0/257564899' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:16.488 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:16 vm03 bash[17524]: audit 2026-03-09T14:28:15.723028+0000 mon.b (mon.2) 26 : audit [DBG] from='client.? 192.168.123.103:0/587977586' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:16.488 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:16 vm03 bash[17524]: audit 2026-03-09T14:28:15.760565+0000 mon.a (mon.0) 769 : audit [DBG] from='client.? 192.168.123.103:0/46638508' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:16.489 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:16 vm03 bash[17524]: audit 2026-03-09T14:28:15.766626+0000 mon.b (mon.2) 27 : audit [DBG] from='client.? 192.168.123.103:0/2846098218' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:16.489 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:16 vm03 bash[17524]: audit 2026-03-09T14:28:16.085464+0000 mon.a (mon.0) 770 : audit [DBG] from='client.? 192.168.123.103:0/2474754420' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:16.489 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:16 vm03 bash[17524]: audit 2026-03-09T14:28:16.101041+0000 mon.b (mon.2) 28 : audit [DBG] from='client.? 192.168.123.103:0/3831617627' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:16.489 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:16 vm03 bash[17524]: audit 2026-03-09T14:28:16.107816+0000 mon.a (mon.0) 771 : audit [DBG] from='client.? 192.168.123.103:0/3060038364' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:16.489 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:16 vm03 bash[17524]: audit 2026-03-09T14:28:16.144270+0000 mon.a (mon.0) 772 : audit [DBG] from='client.? 192.168.123.103:0/3081024820' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:16.489 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:16 vm03 bash[17524]: audit 2026-03-09T14:28:16.149627+0000 mon.b (mon.2) 29 : audit [DBG] from='client.? 192.168.123.103:0/238664300' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:16.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:16 vm04 bash[19581]: cluster 2026-03-09T14:28:14.995498+0000 mgr.x (mgr.14150) 495 : cluster [DBG] pgmap v393: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-09T14:28:16.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:16 vm04 bash[19581]: audit 2026-03-09T14:28:15.699362+0000 mon.a (mon.0) 767 : audit [DBG] from='client.? 192.168.123.103:0/3976120618' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:16.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:16 vm04 bash[19581]: audit 2026-03-09T14:28:15.715631+0000 mon.a (mon.0) 768 : audit [DBG] from='client.? 192.168.123.103:0/257564899' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:16.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:16 vm04 bash[19581]: audit 2026-03-09T14:28:15.723028+0000 mon.b (mon.2) 26 : audit [DBG] from='client.? 192.168.123.103:0/587977586' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:16.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:16 vm04 bash[19581]: audit 2026-03-09T14:28:15.760565+0000 mon.a (mon.0) 769 : audit [DBG] from='client.? 192.168.123.103:0/46638508' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:16.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:16 vm04 bash[19581]: audit 2026-03-09T14:28:15.766626+0000 mon.b (mon.2) 27 : audit [DBG] from='client.? 192.168.123.103:0/2846098218' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:16.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:16 vm04 bash[19581]: audit 2026-03-09T14:28:16.085464+0000 mon.a (mon.0) 770 : audit [DBG] from='client.? 192.168.123.103:0/2474754420' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:16.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:16 vm04 bash[19581]: audit 2026-03-09T14:28:16.101041+0000 mon.b (mon.2) 28 : audit [DBG] from='client.? 192.168.123.103:0/3831617627' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:16.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:16 vm04 bash[19581]: audit 2026-03-09T14:28:16.107816+0000 mon.a (mon.0) 771 : audit [DBG] from='client.? 192.168.123.103:0/3060038364' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:16.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:16 vm04 bash[19581]: audit 2026-03-09T14:28:16.144270+0000 mon.a (mon.0) 772 : audit [DBG] from='client.? 192.168.123.103:0/3081024820' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:16.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:16 vm04 bash[19581]: audit 2026-03-09T14:28:16.149627+0000 mon.b (mon.2) 29 : audit [DBG] from='client.? 192.168.123.103:0/238664300' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
192.168.123.103:0/238664300' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:16.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:16 vm05 bash[20070]: cluster 2026-03-09T14:28:14.995498+0000 mgr.x (mgr.14150) 495 : cluster [DBG] pgmap v393: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:28:16.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:16 vm05 bash[20070]: cluster 2026-03-09T14:28:14.995498+0000 mgr.x (mgr.14150) 495 : cluster [DBG] pgmap v393: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-09T14:28:16.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:16 vm05 bash[20070]: audit 2026-03-09T14:28:15.699362+0000 mon.a (mon.0) 767 : audit [DBG] from='client.? 192.168.123.103:0/3976120618' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:16.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:16 vm05 bash[20070]: audit 2026-03-09T14:28:15.699362+0000 mon.a (mon.0) 767 : audit [DBG] from='client.? 192.168.123.103:0/3976120618' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:16.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:16 vm05 bash[20070]: audit 2026-03-09T14:28:15.715631+0000 mon.a (mon.0) 768 : audit [DBG] from='client.? 192.168.123.103:0/257564899' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:16.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:16 vm05 bash[20070]: audit 2026-03-09T14:28:15.715631+0000 mon.a (mon.0) 768 : audit [DBG] from='client.? 192.168.123.103:0/257564899' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:16.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:16 vm05 bash[20070]: audit 2026-03-09T14:28:15.723028+0000 mon.b (mon.2) 26 : audit [DBG] from='client.? 192.168.123.103:0/587977586' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:16.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:16 vm05 bash[20070]: audit 2026-03-09T14:28:15.723028+0000 mon.b (mon.2) 26 : audit [DBG] from='client.? 192.168.123.103:0/587977586' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:16.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:16 vm05 bash[20070]: audit 2026-03-09T14:28:15.760565+0000 mon.a (mon.0) 769 : audit [DBG] from='client.? 192.168.123.103:0/46638508' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:16.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:16 vm05 bash[20070]: audit 2026-03-09T14:28:15.760565+0000 mon.a (mon.0) 769 : audit [DBG] from='client.? 192.168.123.103:0/46638508' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:16.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:16 vm05 bash[20070]: audit 2026-03-09T14:28:15.766626+0000 mon.b (mon.2) 27 : audit [DBG] from='client.? 192.168.123.103:0/2846098218' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:16.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:16 vm05 bash[20070]: audit 2026-03-09T14:28:15.766626+0000 mon.b (mon.2) 27 : audit [DBG] from='client.? 
192.168.123.103:0/2846098218' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:16.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:16 vm05 bash[20070]: audit 2026-03-09T14:28:16.085464+0000 mon.a (mon.0) 770 : audit [DBG] from='client.? 192.168.123.103:0/2474754420' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:16.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:16 vm05 bash[20070]: audit 2026-03-09T14:28:16.085464+0000 mon.a (mon.0) 770 : audit [DBG] from='client.? 192.168.123.103:0/2474754420' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:16.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:16 vm05 bash[20070]: audit 2026-03-09T14:28:16.101041+0000 mon.b (mon.2) 28 : audit [DBG] from='client.? 192.168.123.103:0/3831617627' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:16.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:16 vm05 bash[20070]: audit 2026-03-09T14:28:16.101041+0000 mon.b (mon.2) 28 : audit [DBG] from='client.? 192.168.123.103:0/3831617627' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:16.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:16 vm05 bash[20070]: audit 2026-03-09T14:28:16.107816+0000 mon.a (mon.0) 771 : audit [DBG] from='client.? 192.168.123.103:0/3060038364' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:16.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:16 vm05 bash[20070]: audit 2026-03-09T14:28:16.107816+0000 mon.a (mon.0) 771 : audit [DBG] from='client.? 192.168.123.103:0/3060038364' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:16.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:16 vm05 bash[20070]: audit 2026-03-09T14:28:16.144270+0000 mon.a (mon.0) 772 : audit [DBG] from='client.? 192.168.123.103:0/3081024820' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:16.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:16 vm05 bash[20070]: audit 2026-03-09T14:28:16.144270+0000 mon.a (mon.0) 772 : audit [DBG] from='client.? 192.168.123.103:0/3081024820' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:16.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:16 vm05 bash[20070]: audit 2026-03-09T14:28:16.149627+0000 mon.b (mon.2) 29 : audit [DBG] from='client.? 192.168.123.103:0/238664300' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:16.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:16 vm05 bash[20070]: audit 2026-03-09T14:28:16.149627+0000 mon.b (mon.2) 29 : audit [DBG] from='client.? 
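Note: the repeating version -> osd dump -> status -> status -> df sequence in these audit entries, each arriving from a fresh client port, is the iSCSI gateway's periodic cluster health poll. A minimal sketch that reproduces the same monitor queries with the python3-rados binding (assuming a reachable cluster via /etc/ceph/ceph.conf and a usable client keyring; the script itself is hypothetical):

    #!/usr/bin/env python3
    # Sketch: replay the mon queries seen in the audit entries above.
    # Each mon_command() call corresponds to one audit 'dispatch' record.
    import json
    import rados

    CMDS = [
        {"prefix": "version"},
        {"prefix": "osd dump", "format": "json"},
        {"prefix": "status", "format": "json"},
        {"prefix": "df", "format": "json"},
    ]

    # Rados is a context manager: connects on enter, shuts down on exit.
    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        for cmd in CMDS:
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], "->", ret, "|", len(out), "bytes")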
2026-03-09T14:28:16.801 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:16 vm03 bash[37744]: ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:16] "GET /api/config HTTP/1.1" 200 -
2026-03-09T14:28:17.260 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:16 vm03 bash[37744]: ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:16] "GET /api/config HTTP/1.1" 200 -
2026-03-09T14:28:17.260 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:16 vm03 bash[37744]: debug Unable to create the Target definition - Could not create ISCSIFabricModule in configFS
2026-03-09T14:28:17.260 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:16 vm03 bash[37744]: debug Failure during gateway 'init' processing
2026-03-09T14:28:17.260 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:16 vm03 bash[37744]: ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:16] "PUT /api/target/iqn.2003-01.com.redhat.iscsi-gw:ceph-gw HTTP/1.1" 500 -
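Note: this is the first real failure in the run. The PUT to /api/target returns 500 because rbd-target-api could not create the LIO iSCSI fabric module in configfs; that usually means configfs is not mounted at /sys/kernel/config or the target_core_mod/iscsi_target_mod kernel modules are unavailable on the gateway host. A minimal triage sketch for the gateway node (a hypothetical helper; the paths below are the conventional kernel locations and may differ inside a cephadm container):

    #!/usr/bin/env python3
    # Sketch: check the usual preconditions for LIO target creation that
    # "Could not create ISCSIFabricModule in configFS" points at.
    import os

    def configfs_mounted(mounts="/proc/mounts"):
        # /proc/mounts fields: device mountpoint fstype options dump pass;
        # configfs must be mounted for rtslib to create fabric modules.
        with open(mounts) as f:
            return any(line.split()[2] == "configfs" for line in f)

    def iscsi_fabric_present(root="/sys/kernel/config/target"):
        # target_core_mod creates /sys/kernel/config/target; the iscsi
        # subdirectory appears once the iSCSI fabric module is usable.
        return os.path.isdir(root), os.path.isdir(os.path.join(root, "iscsi"))

    if __name__ == "__main__":
        print("configfs mounted:", configfs_mounted())
        core, iscsi = iscsi_fabric_present()
        print("target core dir:", core, "| iscsi fabric dir:", iscsi)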
2026-03-09T14:28:17.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:17 vm03 bash[17524]: audit 2026-03-09T14:28:16.464274+0000 mon.a (mon.0) 773 : audit [DBG] from='client.? 192.168.123.103:0/1173639795' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:17.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:17 vm03 bash[17524]: audit 2026-03-09T14:28:16.480106+0000 mon.a (mon.0) 774 : audit [DBG] from='client.? 192.168.123.103:0/3298251898' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:17.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:17 vm03 bash[17524]: audit 2026-03-09T14:28:16.486513+0000 mon.b (mon.2) 30 : audit [DBG] from='client.? 192.168.123.103:0/920526837' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:17.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:17 vm03 bash[17524]: audit 2026-03-09T14:28:16.522290+0000 mon.a (mon.0) 775 : audit [DBG] from='client.? 192.168.123.103:0/1008684446' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:17.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:17 vm03 bash[17524]: audit 2026-03-09T14:28:16.528697+0000 mon.c (mon.1) 28 : audit [DBG] from='client.? 192.168.123.103:0/1417975734' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:17.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:17 vm03 bash[17524]: audit 2026-03-09T14:28:16.841684+0000 mon.a (mon.0) 776 : audit [DBG] from='client.? 192.168.123.103:0/18665939' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:17.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:17 vm03 bash[17524]: audit 2026-03-09T14:28:16.856723+0000 mon.a (mon.0) 777 : audit [DBG] from='client.? 192.168.123.103:0/3344186827' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:17.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:17 vm03 bash[17524]: audit 2026-03-09T14:28:16.862981+0000 mon.a (mon.0) 778 : audit [DBG] from='client.? 192.168.123.103:0/2223073349' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:17.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:17 vm03 bash[17524]: audit 2026-03-09T14:28:16.899578+0000 mon.c (mon.1) 29 : audit [DBG] from='client.? 192.168.123.103:0/697608380' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:17.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:17 vm03 bash[17524]: audit 2026-03-09T14:28:16.905314+0000 mon.a (mon.0) 779 : audit [DBG] from='client.? 192.168.123.103:0/3325218106' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:17.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:17 vm03 bash[17524]: audit 2026-03-09T14:28:17.237238+0000 mon.c (mon.1) 30 : audit [DBG] from='client.? 192.168.123.103:0/4029136694' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:17.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:17 vm03 bash[17524]: audit 2026-03-09T14:28:17.252403+0000 mon.a (mon.0) 780 : audit [DBG] from='client.? 192.168.123.103:0/388645800' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:17.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:17 vm03 bash[17524]: audit 2026-03-09T14:28:17.258976+0000 mon.a (mon.0) 781 : audit [DBG] from='client.? 192.168.123.103:0/359052952' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:17.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:17 vm03 bash[17524]: audit 2026-03-09T14:28:17.297664+0000 mon.a (mon.0) 782 : audit [DBG] from='client.? 192.168.123.103:0/585427865' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:17.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:17 vm03 bash[17524]: audit 2026-03-09T14:28:17.304073+0000 mon.a (mon.0) 783 : audit [DBG] from='client.? 192.168.123.103:0/3362186464' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:17.551 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:17 vm03 bash[37744]: ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:17] "GET /api/config HTTP/1.1" 200 -
2026-03-09T14:28:17.907 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:17 vm03 bash[37744]: ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:17] "GET /api/config HTTP/1.1" 200 -
2026-03-09T14:28:18.013 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:17 vm03 bash[37744]: debug there is no tcmu-runner data available
2026-03-09T14:28:18.301 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:18 vm03 bash[37744]: ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:18] "GET /api/config HTTP/1.1" 200 -
2026-03-09T14:28:18.754 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:18 vm03 bash[37744]: ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:18] "GET /api/config HTTP/1.1" 200 -
2026-03-09T14:28:18.754 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:18 vm03 bash[17524]: cluster 2026-03-09T14:28:16.995778+0000 mgr.x (mgr.14150) 496 : cluster [DBG] pgmap v394: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-09T14:28:18.754 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:18 vm03 bash[17524]: audit 2026-03-09T14:28:17.624244+0000 mon.a (mon.0) 784 : audit [DBG] from='client.? 192.168.123.103:0/2131528170' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
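Note: "there is no tcmu-runner data available" is expected here, since no RBD-backed LUNs exist yet (consistent with the failed target init above); the gateway's perf query simply finds no user-space backstores registered. A small check for the gateway host (a sketch under assumptions: pgrep is available, and the configfs paths are the conventional non-containerized ones):

    #!/usr/bin/env python3
    # Sketch: is tcmu-runner alive, and has it registered any backstores?
    import glob
    import subprocess

    # pgrep exits 0 if at least one exactly-named process matches.
    running = subprocess.run(["pgrep", "-x", "tcmu-runner"],
                             stdout=subprocess.DEVNULL).returncode == 0
    print("tcmu-runner process:", "running" if running else "not found")

    # User-space (tcmu) backstores appear under target/core/user_N/<name>.
    devs = glob.glob("/sys/kernel/config/target/core/user_*/*")
    print("tcmu backstores:", devs or "none")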
2026-03-09T14:28:18.754 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:18 vm03 bash[17524]: audit 2026-03-09T14:28:17.638491+0000 mon.a (mon.0) 785 : audit [DBG] from='client.? 192.168.123.103:0/1390806493' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:18.754 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:18 vm03 bash[17524]: audit 2026-03-09T14:28:17.645015+0000 mon.a (mon.0) 786 : audit [DBG] from='client.? 192.168.123.103:0/3518767582' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:18.754 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:18 vm03 bash[17524]: audit 2026-03-09T14:28:17.682462+0000 mon.a (mon.0) 787 : audit [DBG] from='client.? 192.168.123.103:0/2603176440' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:18.754 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:18 vm03 bash[17524]: audit 2026-03-09T14:28:17.688377+0000 mon.a (mon.0) 788 : audit [DBG] from='client.? 192.168.123.103:0/3152421694' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:18.754 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:18 vm03 bash[17524]: audit 2026-03-09T14:28:17.990373+0000 mon.b (mon.2) 31 : audit [DBG] from='client.? 192.168.123.103:0/609470043' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:18.754 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:18 vm03 bash[17524]: audit 2026-03-09T14:28:18.005442+0000 mon.c (mon.1) 31 : audit [DBG] from='client.? 192.168.123.103:0/3082007635' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:18.755 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:18 vm03 bash[17524]: audit 2026-03-09T14:28:18.011847+0000 mon.a (mon.0) 789 : audit [DBG] from='client.? 192.168.123.103:0/772416609' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:18.755 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:18 vm03 bash[17524]: audit 2026-03-09T14:28:18.048686+0000 mon.a (mon.0) 790 : audit [DBG] from='client.? 192.168.123.103:0/3554758794' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:18.755 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:18 vm03 bash[17524]: audit 2026-03-09T14:28:18.054634+0000 mon.a (mon.0) 791 : audit [DBG] from='client.? 192.168.123.103:0/3956686956' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:18.755 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:18 vm03 bash[17524]: audit 2026-03-09T14:28:18.360924+0000 mon.a (mon.0) 792 : audit [DBG] from='client.? 192.168.123.103:0/3884240286' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:18.755 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:18 vm03 bash[17524]: audit 2026-03-09T14:28:18.376512+0000 mon.c (mon.1) 32 : audit [DBG] from='client.? 192.168.123.103:0/2161893990' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:18.755 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:18 vm03 bash[17524]: audit 2026-03-09T14:28:18.383112+0000 mon.c (mon.1) 33 : audit [DBG] from='client.? 192.168.123.103:0/956219478' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
192.168.123.103:0/956219478' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:19.050 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:18 vm03 bash[37744]: debug ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:18] "GET /api/config HTTP/1.1" 200 - 2026-03-09T14:28:19.050 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:18 vm03 bash[37744]: ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:18] "GET /api/config HTTP/1.1" 200 - 2026-03-09T14:28:19.255 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:18 vm05 bash[38699]: debug there is no tcmu-runner data available 2026-03-09T14:28:19.392 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:19 vm03 bash[37744]: debug ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:19] "GET /api/config HTTP/1.1" 200 - 2026-03-09T14:28:19.392 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:19 vm03 bash[37744]: ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:19] "GET /api/config HTTP/1.1" 200 - 2026-03-09T14:28:19.506 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:19 vm03 bash[17524]: audit 2026-03-09T14:28:17.909482+0000 mgr.x (mgr.14150) 497 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:28:19.506 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:19 vm03 bash[17524]: audit 2026-03-09T14:28:17.909482+0000 mgr.x (mgr.14150) 497 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:28:19.506 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:19 vm03 bash[17524]: audit 2026-03-09T14:28:18.421819+0000 mon.c (mon.1) 34 : audit [DBG] from='client.? 192.168.123.103:0/3113621942' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:19.506 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:19 vm03 bash[17524]: audit 2026-03-09T14:28:18.421819+0000 mon.c (mon.1) 34 : audit [DBG] from='client.? 192.168.123.103:0/3113621942' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:19.506 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:19 vm03 bash[17524]: audit 2026-03-09T14:28:18.427207+0000 mon.a (mon.0) 793 : audit [DBG] from='client.? 192.168.123.103:0/3254919808' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:19.506 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:19 vm03 bash[17524]: audit 2026-03-09T14:28:18.427207+0000 mon.a (mon.0) 793 : audit [DBG] from='client.? 192.168.123.103:0/3254919808' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:19.506 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:19 vm03 bash[17524]: audit 2026-03-09T14:28:18.731857+0000 mon.b (mon.2) 32 : audit [DBG] from='client.? 192.168.123.103:0/770216215' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:19.506 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:19 vm03 bash[17524]: audit 2026-03-09T14:28:18.731857+0000 mon.b (mon.2) 32 : audit [DBG] from='client.? 192.168.123.103:0/770216215' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:19.506 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:19 vm03 bash[17524]: audit 2026-03-09T14:28:18.746535+0000 mon.a (mon.0) 794 : audit [DBG] from='client.? 
2026-03-09T14:28:19.506 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:19 vm03 bash[17524]: audit 2026-03-09T14:28:18.746535+0000 mon.a (mon.0) 794 : audit [DBG] from='client.? 192.168.123.103:0/3661005783' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:19.506 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:19 vm03 bash[17524]: audit 2026-03-09T14:28:18.752843+0000 mon.a (mon.0) 795 : audit [DBG] from='client.? 192.168.123.103:0/2333810720' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:19.506 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:19 vm03 bash[17524]: audit 2026-03-09T14:28:18.788331+0000 mon.a (mon.0) 796 : audit [DBG] from='client.? 192.168.123.103:0/2525166426' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:19.507 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:19 vm03 bash[17524]: audit 2026-03-09T14:28:18.793732+0000 mon.a (mon.0) 797 : audit [DBG] from='client.? 192.168.123.103:0/1493755649' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:19.507 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:19 vm03 bash[17524]: audit 2026-03-09T14:28:19.099393+0000 mon.b (mon.2) 33 : audit [DBG] from='client.? 192.168.123.103:0/3774195662' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:19.507 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:19 vm03 bash[17524]: audit 2026-03-09T14:28:19.114553+0000 mon.a (mon.0) 798 : audit [DBG] from='client.? 192.168.123.103:0/3571368454' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:19.507 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:19 vm03 bash[17524]: audit 2026-03-09T14:28:19.120704+0000 mon.a (mon.0) 799 : audit [DBG] from='client.? 192.168.123.103:0/1989081478' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:19.507 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:19 vm03 bash[17524]: audit 2026-03-09T14:28:19.156444+0000 mon.a (mon.0) 800 : audit [DBG] from='client.? 192.168.123.103:0/1069682459' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:19.507 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:19 vm03 bash[17524]: audit 2026-03-09T14:28:19.162003+0000 mon.a (mon.0) 801 : audit [DBG] from='client.? 192.168.123.103:0/2566658086' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:19.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:19 vm04 bash[19581]: audit 2026-03-09T14:28:17.909482+0000 mgr.x (mgr.14150) 497 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:28:19.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:19 vm04 bash[19581]: audit 2026-03-09T14:28:18.421819+0000 mon.c (mon.1) 34 : audit [DBG] from='client.? 192.168.123.103:0/3113621942' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:19.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:19 vm04 bash[19581]: audit 2026-03-09T14:28:18.427207+0000 mon.a (mon.0) 793 : audit [DBG] from='client.? 192.168.123.103:0/3254919808' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:19.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:19 vm04 bash[19581]: audit 2026-03-09T14:28:18.731857+0000 mon.b (mon.2) 32 : audit [DBG] from='client.? 192.168.123.103:0/770216215' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:19.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:19 vm04 bash[19581]: audit 2026-03-09T14:28:18.746535+0000 mon.a (mon.0) 794 : audit [DBG] from='client.? 192.168.123.103:0/3661005783' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:19.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:19 vm04 bash[19581]: audit 2026-03-09T14:28:18.752843+0000 mon.a (mon.0) 795 : audit [DBG] from='client.? 192.168.123.103:0/2333810720' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:19.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:19 vm04 bash[19581]: audit 2026-03-09T14:28:18.788331+0000 mon.a (mon.0) 796 : audit [DBG] from='client.? 192.168.123.103:0/2525166426' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:19.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:19 vm04 bash[19581]: audit 2026-03-09T14:28:18.793732+0000 mon.a (mon.0) 797 : audit [DBG] from='client.? 192.168.123.103:0/1493755649' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:19.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:19 vm04 bash[19581]: audit 2026-03-09T14:28:19.099393+0000 mon.b (mon.2) 33 : audit [DBG] from='client.? 192.168.123.103:0/3774195662' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:19.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:19 vm04 bash[19581]: audit 2026-03-09T14:28:19.114553+0000 mon.a (mon.0) 798 : audit [DBG] from='client.? 192.168.123.103:0/3571368454' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:19.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:19 vm04 bash[19581]: audit 2026-03-09T14:28:19.120704+0000 mon.a (mon.0) 799 : audit [DBG] from='client.? 192.168.123.103:0/1989081478' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:19.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:19 vm04 bash[19581]: audit 2026-03-09T14:28:19.156444+0000 mon.a (mon.0) 800 : audit [DBG] from='client.? 192.168.123.103:0/1069682459' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:19.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:19 vm04 bash[19581]: audit 2026-03-09T14:28:19.162003+0000 mon.a (mon.0) 801 : audit [DBG] from='client.? 192.168.123.103:0/2566658086' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:19.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:19 vm05 bash[20070]: audit 2026-03-09T14:28:17.909482+0000 mgr.x (mgr.14150) 497 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:28:19.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:19 vm05 bash[20070]: audit 2026-03-09T14:28:18.421819+0000 mon.c (mon.1) 34 : audit [DBG] from='client.? 192.168.123.103:0/3113621942' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:19.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:19 vm05 bash[20070]: audit 2026-03-09T14:28:18.427207+0000 mon.a (mon.0) 793 : audit [DBG] from='client.? 192.168.123.103:0/3254919808' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:19.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:19 vm05 bash[20070]: audit 2026-03-09T14:28:18.731857+0000 mon.b (mon.2) 32 : audit [DBG] from='client.? 192.168.123.103:0/770216215' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:19.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:19 vm05 bash[20070]: audit 2026-03-09T14:28:18.746535+0000 mon.a (mon.0) 794 : audit [DBG] from='client.? 192.168.123.103:0/3661005783' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:19.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:19 vm05 bash[20070]: audit 2026-03-09T14:28:18.752843+0000 mon.a (mon.0) 795 : audit [DBG] from='client.? 192.168.123.103:0/2333810720' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:19.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:19 vm05 bash[20070]: audit 2026-03-09T14:28:18.788331+0000 mon.a (mon.0) 796 : audit [DBG] from='client.? 192.168.123.103:0/2525166426' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:19.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:19 vm05 bash[20070]: audit 2026-03-09T14:28:18.793732+0000 mon.a (mon.0) 797 : audit [DBG] from='client.? 192.168.123.103:0/1493755649' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:19.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:19 vm05 bash[20070]: audit 2026-03-09T14:28:19.099393+0000 mon.b (mon.2) 33 : audit [DBG] from='client.? 192.168.123.103:0/3774195662' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:19.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:19 vm05 bash[20070]: audit 2026-03-09T14:28:19.114553+0000 mon.a (mon.0) 798 : audit [DBG] from='client.? 192.168.123.103:0/3571368454' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:19.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:19 vm05 bash[20070]: audit 2026-03-09T14:28:19.120704+0000 mon.a (mon.0) 799 : audit [DBG] from='client.? 192.168.123.103:0/1989081478' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:19.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:19 vm05 bash[20070]: audit 2026-03-09T14:28:19.156444+0000 mon.a (mon.0) 800 : audit [DBG] from='client.? 192.168.123.103:0/1069682459' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:19.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:19 vm05 bash[20070]: audit 2026-03-09T14:28:19.162003+0000 mon.a (mon.0) 801 : audit [DBG] from='client.? 192.168.123.103:0/2566658086' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:19.800 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:19 vm03 bash[37744]: debug ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:19] "GET /api/config HTTP/1.1" 200 -
2026-03-09T14:28:19.801 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:19 vm03 bash[37744]: ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:19] "GET /api/config HTTP/1.1" 200 -
2026-03-09T14:28:20.269 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:19 vm03 bash[37744]: debug ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:19] "GET /api/config HTTP/1.1" 200 -
2026-03-09T14:28:20.269 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:19 vm03 bash[37744]: ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:19] "GET /api/config HTTP/1.1" 200 -
2026-03-09T14:28:20.550 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:20 vm03 bash[37744]: debug ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:20] "GET /api/config HTTP/1.1" 200 -
2026-03-09T14:28:20.550 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:20 vm03 bash[37744]: ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:20] "GET /api/config HTTP/1.1" 200 -
2026-03-09T14:28:20.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:20 vm03 bash[17524]: audit 2026-03-09T14:28:18.880456+0000 mgr.x (mgr.14150) 498 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:28:20.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:20 vm03 bash[17524]: cluster 2026-03-09T14:28:18.996006+0000 mgr.x (mgr.14150) 499 : cluster [DBG] pgmap v395: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 170 B/s wr, 2 op/s
2026-03-09T14:28:20.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:20 vm03 bash[17524]: audit 2026-03-09T14:28:19.482180+0000 mon.a (mon.0) 802 : audit [DBG] from='client.? 192.168.123.103:0/3055652550' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:20.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:20 vm03 bash[17524]: audit 2026-03-09T14:28:19.497759+0000 mon.a (mon.0) 803 : audit [DBG] from='client.? 192.168.123.103:0/95855360' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:20.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:20 vm03 bash[17524]: audit 2026-03-09T14:28:19.504878+0000 mon.a (mon.0) 804 : audit [DBG] from='client.? 192.168.123.103:0/3687075535' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:20.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:20 vm03 bash[17524]: audit 2026-03-09T14:28:19.541905+0000 mon.a (mon.0) 805 : audit [DBG] from='client.? 192.168.123.103:0/298429480' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:20.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:20 vm03 bash[17524]: audit 2026-03-09T14:28:19.548256+0000 mon.a (mon.0) 806 : audit [DBG] from='client.? 192.168.123.103:0/4139324667' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:20.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:20 vm03 bash[17524]: audit 2026-03-09T14:28:19.857410+0000 mon.a (mon.0) 807 : audit [DBG] from='client.? 192.168.123.103:0/3790709337' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:20.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:20 vm03 bash[17524]: audit 2026-03-09T14:28:19.872488+0000 mon.a (mon.0) 808 : audit [DBG] from='client.? 192.168.123.103:0/644362766' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:20.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:20 vm03 bash[17524]: audit 2026-03-09T14:28:19.878846+0000 mon.a (mon.0) 809 : audit [DBG] from='client.? 192.168.123.103:0/882107634' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:20.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:20 vm03 bash[17524]: audit 2026-03-09T14:28:19.914215+0000 mon.a (mon.0) 810 : audit [DBG] from='client.? 192.168.123.103:0/1256693267' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:20.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:20 vm03 bash[17524]: audit 2026-03-09T14:28:19.920203+0000 mon.a (mon.0) 811 : audit [DBG] from='client.? 192.168.123.103:0/4127295645' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:20.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:20 vm03 bash[17524]: audit 2026-03-09T14:28:20.245855+0000 mon.a (mon.0) 812 : audit [DBG] from='client.? 192.168.123.103:0/2826029145' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:20.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:20 vm03 bash[17524]: audit 2026-03-09T14:28:20.260956+0000 mon.a (mon.0) 813 : audit [DBG] from='client.? 192.168.123.103:0/2198191709' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:20.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:20 vm03 bash[17524]: audit 2026-03-09T14:28:20.267887+0000 mon.a (mon.0) 814 : audit [DBG] from='client.? 192.168.123.103:0/2396542392' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:20.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:20 vm03 bash[17524]: audit 2026-03-09T14:28:20.304129+0000 mon.a (mon.0) 815 : audit [DBG] from='client.? 192.168.123.103:0/2615562212' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:20.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:20 vm03 bash[17524]: audit 2026-03-09T14:28:20.310275+0000 mon.a (mon.0) 816 : audit [DBG] from='client.? 192.168.123.103:0/2504084196' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:20.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:20 vm04 bash[19581]: audit 2026-03-09T14:28:18.880456+0000 mgr.x (mgr.14150) 498 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:28:20.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:20 vm04 bash[19581]: cluster 2026-03-09T14:28:18.996006+0000 mgr.x (mgr.14150) 499 : cluster [DBG] pgmap v395: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 170 B/s wr, 2 op/s
2026-03-09T14:28:20.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:20 vm04 bash[19581]: audit 2026-03-09T14:28:19.482180+0000 mon.a (mon.0) 802 : audit [DBG] from='client.? 192.168.123.103:0/3055652550' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:20.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:20 vm04 bash[19581]: audit 2026-03-09T14:28:19.497759+0000 mon.a (mon.0) 803 : audit [DBG] from='client.? 192.168.123.103:0/95855360' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:20.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:20 vm04 bash[19581]: audit 2026-03-09T14:28:19.504878+0000 mon.a (mon.0) 804 : audit [DBG] from='client.? 192.168.123.103:0/3687075535' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:20.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:20 vm04 bash[19581]: audit 2026-03-09T14:28:19.541905+0000 mon.a (mon.0) 805 : audit [DBG] from='client.? 192.168.123.103:0/298429480' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:20.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:20 vm04 bash[19581]: audit 2026-03-09T14:28:19.548256+0000 mon.a (mon.0) 806 : audit [DBG] from='client.? 192.168.123.103:0/4139324667' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:20.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:20 vm04 bash[19581]: audit 2026-03-09T14:28:19.857410+0000 mon.a (mon.0) 807 : audit [DBG] from='client.? 192.168.123.103:0/3790709337' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:20.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:20 vm04 bash[19581]: audit 2026-03-09T14:28:19.872488+0000 mon.a (mon.0) 808 : audit [DBG] from='client.? 192.168.123.103:0/644362766' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:20.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:20 vm04 bash[19581]: audit 2026-03-09T14:28:19.878846+0000 mon.a (mon.0) 809 : audit [DBG] from='client.? 192.168.123.103:0/882107634' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:20.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:20 vm04 bash[19581]: audit 2026-03-09T14:28:19.914215+0000 mon.a (mon.0) 810 : audit [DBG] from='client.? 192.168.123.103:0/1256693267' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:20.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:20 vm04 bash[19581]: audit 2026-03-09T14:28:19.920203+0000 mon.a (mon.0) 811 : audit [DBG] from='client.? 192.168.123.103:0/4127295645' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:20.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:20 vm04 bash[19581]: audit 2026-03-09T14:28:20.245855+0000 mon.a (mon.0) 812 : audit [DBG] from='client.? 192.168.123.103:0/2826029145' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:20.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:20 vm04 bash[19581]: audit 2026-03-09T14:28:20.260956+0000 mon.a (mon.0) 813 : audit [DBG] from='client.? 192.168.123.103:0/2198191709' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:20.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:20 vm04 bash[19581]: audit 2026-03-09T14:28:20.267887+0000 mon.a (mon.0) 814 : audit [DBG] from='client.? 192.168.123.103:0/2396542392' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:20.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:20 vm04 bash[19581]: audit 2026-03-09T14:28:20.304129+0000 mon.a (mon.0) 815 : audit [DBG] from='client.? 192.168.123.103:0/2615562212' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:20.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:20 vm04 bash[19581]: audit 2026-03-09T14:28:20.310275+0000 mon.a (mon.0) 816 : audit [DBG] from='client.? 192.168.123.103:0/2504084196' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:20.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:20 vm05 bash[20070]: audit 2026-03-09T14:28:18.880456+0000 mgr.x (mgr.14150) 498 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:28:20.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:20 vm05 bash[20070]: cluster 2026-03-09T14:28:18.996006+0000 mgr.x (mgr.14150) 499 : cluster [DBG] pgmap v395: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 170 B/s wr, 2 op/s
2026-03-09T14:28:20.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:20 vm05 bash[20070]: audit 2026-03-09T14:28:19.482180+0000 mon.a (mon.0) 802 : audit [DBG] from='client.? 192.168.123.103:0/3055652550' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:20.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:20 vm05 bash[20070]: audit 2026-03-09T14:28:19.497759+0000 mon.a (mon.0) 803 : audit [DBG] from='client.? 192.168.123.103:0/95855360' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:20.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:20 vm05 bash[20070]: audit 2026-03-09T14:28:19.504878+0000 mon.a (mon.0) 804 : audit [DBG] from='client.? 192.168.123.103:0/3687075535' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:20.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:20 vm05 bash[20070]: audit 2026-03-09T14:28:19.541905+0000 mon.a (mon.0) 805 : audit [DBG] from='client.? 192.168.123.103:0/298429480' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:20.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:20 vm05 bash[20070]: audit 2026-03-09T14:28:19.548256+0000 mon.a (mon.0) 806 : audit [DBG] from='client.? 192.168.123.103:0/4139324667' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:20.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:20 vm05 bash[20070]: audit 2026-03-09T14:28:19.857410+0000 mon.a (mon.0) 807 : audit [DBG] from='client.? 192.168.123.103:0/3790709337' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:20.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:20 vm05 bash[20070]: audit 2026-03-09T14:28:19.872488+0000 mon.a (mon.0) 808 : audit [DBG] from='client.? 192.168.123.103:0/644362766' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:20.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:20 vm05 bash[20070]: audit 2026-03-09T14:28:19.878846+0000 mon.a (mon.0) 809 : audit [DBG] from='client.? 192.168.123.103:0/882107634' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:20.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:20 vm05 bash[20070]: audit 2026-03-09T14:28:19.914215+0000 mon.a (mon.0) 810 : audit [DBG] from='client.? 192.168.123.103:0/1256693267' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:20.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:20 vm05 bash[20070]: audit 2026-03-09T14:28:19.920203+0000 mon.a (mon.0) 811 : audit [DBG] from='client.? 192.168.123.103:0/4127295645' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:20.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:20 vm05 bash[20070]: audit 2026-03-09T14:28:20.245855+0000 mon.a (mon.0) 812 : audit [DBG] from='client.? 192.168.123.103:0/2826029145' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:20.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:20 vm05 bash[20070]: audit 2026-03-09T14:28:20.260956+0000 mon.a (mon.0) 813 : audit [DBG] from='client.? 192.168.123.103:0/2198191709' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:20.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:20 vm05 bash[20070]: audit 2026-03-09T14:28:20.267887+0000 mon.a (mon.0) 814 : audit [DBG] from='client.? 192.168.123.103:0/2396542392' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:20.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:20 vm05 bash[20070]: audit 2026-03-09T14:28:20.304129+0000 mon.a (mon.0) 815 : audit [DBG] from='client.? 192.168.123.103:0/2615562212' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:20.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:20 vm05 bash[20070]: audit 2026-03-09T14:28:20.310275+0000 mon.a (mon.0) 816 : audit [DBG] from='client.? 192.168.123.103:0/2504084196' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:21.007 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:20 vm03 bash[37744]: debug ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:20] "GET /api/config HTTP/1.1" 200 -
2026-03-09T14:28:21.007 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:20 vm03 bash[37744]: ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:20] "GET /api/config HTTP/1.1" 200 -
2026-03-09T14:28:21.300 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[37744]: debug ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:21] "GET /api/config HTTP/1.1" 200 -
2026-03-09T14:28:21.300 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[37744]: ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:21] "GET /api/config HTTP/1.1" 200 -
2026-03-09T14:28:21.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:21 vm04 bash[19581]: audit 2026-03-09T14:28:20.616612+0000 mon.a (mon.0) 817 : audit [DBG] from='client.? 192.168.123.103:0/1132676773' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:21.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:21 vm04 bash[19581]: audit 2026-03-09T14:28:20.632119+0000 mon.b (mon.2) 34 : audit [DBG] from='client.? 192.168.123.103:0/2967081558' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:21.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:21 vm04 bash[19581]: audit 2026-03-09T14:28:20.639281+0000 mon.a (mon.0) 818 : audit [DBG] from='client.? 192.168.123.103:0/2995755783' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:21.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:21 vm04 bash[19581]: audit 2026-03-09T14:28:20.675430+0000 mon.c (mon.1) 35 : audit [DBG] from='client.? 192.168.123.103:0/2709759634' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:21.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:21 vm04 bash[19581]: audit 2026-03-09T14:28:20.681062+0000 mon.b (mon.2) 35 : audit [DBG] from='client.? 192.168.123.103:0/1159459287' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:21.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:21 vm04 bash[19581]: audit 2026-03-09T14:28:20.983895+0000 mon.b (mon.2) 36 : audit [DBG] from='client.? 192.168.123.103:0/3232225061' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:21.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:21 vm04 bash[19581]: audit 2026-03-09T14:28:20.999501+0000 mon.a (mon.0) 819 : audit [DBG] from='client.? 192.168.123.103:0/2177689340' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:21.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:21 vm04 bash[19581]: audit 2026-03-09T14:28:21.006020+0000 mon.b (mon.2) 37 : audit [DBG] from='client.? 192.168.123.103:0/1550806034' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:21.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:21 vm04 bash[19581]: audit 2026-03-09T14:28:21.045175+0000 mon.a (mon.0) 820 : audit [DBG] from='client.? 192.168.123.103:0/205128665' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:21.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:21 vm04 bash[19581]: audit 2026-03-09T14:28:21.051060+0000 mon.b (mon.2) 38 : audit [DBG] from='client.? 192.168.123.103:0/29233580' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:21.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:21 vm04 bash[19581]: audit 2026-03-09T14:28:21.361839+0000 mon.b (mon.2) 39 : audit [DBG] from='client.? 192.168.123.103:0/3797725682' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:21.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:21 vm04 bash[19581]: audit 2026-03-09T14:28:21.377595+0000 mon.c (mon.1) 36 : audit [DBG] from='client.? 192.168.123.103:0/2624802519' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:21.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:21 vm04 bash[19581]: audit 2026-03-09T14:28:21.383366+0000 mon.a (mon.0) 821 : audit [DBG] from='client.? 192.168.123.103:0/846631546' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:21.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:21 vm05 bash[20070]: audit 2026-03-09T14:28:20.616612+0000 mon.a (mon.0) 817 : audit [DBG] from='client.? 192.168.123.103:0/1132676773' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:21.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:21 vm05 bash[20070]: audit 2026-03-09T14:28:20.632119+0000 mon.b (mon.2) 34 : audit [DBG] from='client.? 192.168.123.103:0/2967081558' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:21.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:21 vm05 bash[20070]: audit 2026-03-09T14:28:20.639281+0000 mon.a (mon.0) 818 : audit [DBG] from='client.? 192.168.123.103:0/2995755783' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:21.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:21 vm05 bash[20070]: audit 2026-03-09T14:28:20.675430+0000 mon.c (mon.1) 35 : audit [DBG] from='client.? 
192.168.123.103:0/2709759634' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:21.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:21 vm05 bash[20070]: audit 2026-03-09T14:28:20.675430+0000 mon.c (mon.1) 35 : audit [DBG] from='client.? 192.168.123.103:0/2709759634' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:21.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:21 vm05 bash[20070]: audit 2026-03-09T14:28:20.681062+0000 mon.b (mon.2) 35 : audit [DBG] from='client.? 192.168.123.103:0/1159459287' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:21.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:21 vm05 bash[20070]: audit 2026-03-09T14:28:20.681062+0000 mon.b (mon.2) 35 : audit [DBG] from='client.? 192.168.123.103:0/1159459287' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:21.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:21 vm05 bash[20070]: audit 2026-03-09T14:28:20.983895+0000 mon.b (mon.2) 36 : audit [DBG] from='client.? 192.168.123.103:0/3232225061' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:21.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:21 vm05 bash[20070]: audit 2026-03-09T14:28:20.983895+0000 mon.b (mon.2) 36 : audit [DBG] from='client.? 192.168.123.103:0/3232225061' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:21.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:21 vm05 bash[20070]: audit 2026-03-09T14:28:20.999501+0000 mon.a (mon.0) 819 : audit [DBG] from='client.? 192.168.123.103:0/2177689340' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:21.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:21 vm05 bash[20070]: audit 2026-03-09T14:28:20.999501+0000 mon.a (mon.0) 819 : audit [DBG] from='client.? 192.168.123.103:0/2177689340' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:21.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:21 vm05 bash[20070]: audit 2026-03-09T14:28:21.006020+0000 mon.b (mon.2) 37 : audit [DBG] from='client.? 192.168.123.103:0/1550806034' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:21.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:21 vm05 bash[20070]: audit 2026-03-09T14:28:21.006020+0000 mon.b (mon.2) 37 : audit [DBG] from='client.? 192.168.123.103:0/1550806034' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:21.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:21 vm05 bash[20070]: audit 2026-03-09T14:28:21.045175+0000 mon.a (mon.0) 820 : audit [DBG] from='client.? 192.168.123.103:0/205128665' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:21.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:21 vm05 bash[20070]: audit 2026-03-09T14:28:21.045175+0000 mon.a (mon.0) 820 : audit [DBG] from='client.? 192.168.123.103:0/205128665' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:21.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:21 vm05 bash[20070]: audit 2026-03-09T14:28:21.051060+0000 mon.b (mon.2) 38 : audit [DBG] from='client.? 
192.168.123.103:0/29233580' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:21.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:21 vm05 bash[20070]: audit 2026-03-09T14:28:21.051060+0000 mon.b (mon.2) 38 : audit [DBG] from='client.? 192.168.123.103:0/29233580' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:21.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:21 vm05 bash[20070]: audit 2026-03-09T14:28:21.361839+0000 mon.b (mon.2) 39 : audit [DBG] from='client.? 192.168.123.103:0/3797725682' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:21.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:21 vm05 bash[20070]: audit 2026-03-09T14:28:21.361839+0000 mon.b (mon.2) 39 : audit [DBG] from='client.? 192.168.123.103:0/3797725682' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:21.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:21 vm05 bash[20070]: audit 2026-03-09T14:28:21.377595+0000 mon.c (mon.1) 36 : audit [DBG] from='client.? 192.168.123.103:0/2624802519' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:21.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:21 vm05 bash[20070]: audit 2026-03-09T14:28:21.377595+0000 mon.c (mon.1) 36 : audit [DBG] from='client.? 192.168.123.103:0/2624802519' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:21.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:21 vm05 bash[20070]: audit 2026-03-09T14:28:21.383366+0000 mon.a (mon.0) 821 : audit [DBG] from='client.? 192.168.123.103:0/846631546' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:21.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:21 vm05 bash[20070]: audit 2026-03-09T14:28:21.383366+0000 mon.a (mon.0) 821 : audit [DBG] from='client.? 192.168.123.103:0/846631546' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:21.758 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[37744]: debug ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:21] "GET /api/config HTTP/1.1" 200 - 2026-03-09T14:28:21.758 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[37744]: ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:21] "GET /api/config HTTP/1.1" 200 - 2026-03-09T14:28:21.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[17524]: audit 2026-03-09T14:28:20.616612+0000 mon.a (mon.0) 817 : audit [DBG] from='client.? 192.168.123.103:0/1132676773' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:21.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[17524]: audit 2026-03-09T14:28:20.616612+0000 mon.a (mon.0) 817 : audit [DBG] from='client.? 192.168.123.103:0/1132676773' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:21.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[17524]: audit 2026-03-09T14:28:20.632119+0000 mon.b (mon.2) 34 : audit [DBG] from='client.? 192.168.123.103:0/2967081558' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:21.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[17524]: audit 2026-03-09T14:28:20.632119+0000 mon.b (mon.2) 34 : audit [DBG] from='client.? 
192.168.123.103:0/2967081558' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:21.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[17524]: audit 2026-03-09T14:28:20.639281+0000 mon.a (mon.0) 818 : audit [DBG] from='client.? 192.168.123.103:0/2995755783' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:21.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[17524]: audit 2026-03-09T14:28:20.639281+0000 mon.a (mon.0) 818 : audit [DBG] from='client.? 192.168.123.103:0/2995755783' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:21.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[17524]: audit 2026-03-09T14:28:20.675430+0000 mon.c (mon.1) 35 : audit [DBG] from='client.? 192.168.123.103:0/2709759634' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:21.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[17524]: audit 2026-03-09T14:28:20.675430+0000 mon.c (mon.1) 35 : audit [DBG] from='client.? 192.168.123.103:0/2709759634' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:21.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[17524]: audit 2026-03-09T14:28:20.681062+0000 mon.b (mon.2) 35 : audit [DBG] from='client.? 192.168.123.103:0/1159459287' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:21.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[17524]: audit 2026-03-09T14:28:20.681062+0000 mon.b (mon.2) 35 : audit [DBG] from='client.? 192.168.123.103:0/1159459287' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:21.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[17524]: audit 2026-03-09T14:28:20.983895+0000 mon.b (mon.2) 36 : audit [DBG] from='client.? 192.168.123.103:0/3232225061' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:21.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[17524]: audit 2026-03-09T14:28:20.983895+0000 mon.b (mon.2) 36 : audit [DBG] from='client.? 192.168.123.103:0/3232225061' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:21.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[17524]: audit 2026-03-09T14:28:20.999501+0000 mon.a (mon.0) 819 : audit [DBG] from='client.? 192.168.123.103:0/2177689340' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:21.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[17524]: audit 2026-03-09T14:28:20.999501+0000 mon.a (mon.0) 819 : audit [DBG] from='client.? 192.168.123.103:0/2177689340' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:21.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[17524]: audit 2026-03-09T14:28:21.006020+0000 mon.b (mon.2) 37 : audit [DBG] from='client.? 192.168.123.103:0/1550806034' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:21.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[17524]: audit 2026-03-09T14:28:21.006020+0000 mon.b (mon.2) 37 : audit [DBG] from='client.? 
192.168.123.103:0/1550806034' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:21.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[17524]: audit 2026-03-09T14:28:21.045175+0000 mon.a (mon.0) 820 : audit [DBG] from='client.? 192.168.123.103:0/205128665' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:21.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[17524]: audit 2026-03-09T14:28:21.045175+0000 mon.a (mon.0) 820 : audit [DBG] from='client.? 192.168.123.103:0/205128665' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:21.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[17524]: audit 2026-03-09T14:28:21.051060+0000 mon.b (mon.2) 38 : audit [DBG] from='client.? 192.168.123.103:0/29233580' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:21.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[17524]: audit 2026-03-09T14:28:21.051060+0000 mon.b (mon.2) 38 : audit [DBG] from='client.? 192.168.123.103:0/29233580' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:21.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[17524]: audit 2026-03-09T14:28:21.361839+0000 mon.b (mon.2) 39 : audit [DBG] from='client.? 192.168.123.103:0/3797725682' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:21.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[17524]: audit 2026-03-09T14:28:21.361839+0000 mon.b (mon.2) 39 : audit [DBG] from='client.? 192.168.123.103:0/3797725682' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:21.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[17524]: audit 2026-03-09T14:28:21.377595+0000 mon.c (mon.1) 36 : audit [DBG] from='client.? 192.168.123.103:0/2624802519' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:21.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[17524]: audit 2026-03-09T14:28:21.377595+0000 mon.c (mon.1) 36 : audit [DBG] from='client.? 192.168.123.103:0/2624802519' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:21.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[17524]: audit 2026-03-09T14:28:21.383366+0000 mon.a (mon.0) 821 : audit [DBG] from='client.? 192.168.123.103:0/846631546' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:21.759 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[17524]: audit 2026-03-09T14:28:21.383366+0000 mon.a (mon.0) 821 : audit [DBG] from='client.? 
192.168.123.103:0/846631546' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.050 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[37744]: debug ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:21] "GET /api/config HTTP/1.1" 200 - 2026-03-09T14:28:22.051 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:21 vm03 bash[37744]: ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:21] "GET /api/config HTTP/1.1" 200 - 2026-03-09T14:28:22.407 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[37744]: debug ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:22] "GET /api/config HTTP/1.1" 200 - 2026-03-09T14:28:22.407 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[37744]: ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:22] "GET /api/config HTTP/1.1" 200 - 2026-03-09T14:28:22.514 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[17524]: cluster 2026-03-09T14:28:20.996241+0000 mgr.x (mgr.14150) 500 : cluster [DBG] pgmap v396: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.3 KiB/s rd, 341 B/s wr, 6 op/s 2026-03-09T14:28:22.514 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[17524]: cluster 2026-03-09T14:28:20.996241+0000 mgr.x (mgr.14150) 500 : cluster [DBG] pgmap v396: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.3 KiB/s rd, 341 B/s wr, 6 op/s 2026-03-09T14:28:22.514 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[17524]: audit 2026-03-09T14:28:21.419983+0000 mon.b (mon.2) 40 : audit [DBG] from='client.? 192.168.123.103:0/2750134627' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.514 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[17524]: audit 2026-03-09T14:28:21.419983+0000 mon.b (mon.2) 40 : audit [DBG] from='client.? 192.168.123.103:0/2750134627' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.514 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[17524]: audit 2026-03-09T14:28:21.425976+0000 mon.b (mon.2) 41 : audit [DBG] from='client.? 192.168.123.103:0/2188034253' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:22.514 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[17524]: audit 2026-03-09T14:28:21.425976+0000 mon.b (mon.2) 41 : audit [DBG] from='client.? 192.168.123.103:0/2188034253' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:22.514 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[17524]: audit 2026-03-09T14:28:21.736721+0000 mon.a (mon.0) 822 : audit [DBG] from='client.? 192.168.123.103:0/76095283' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:22.514 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[17524]: audit 2026-03-09T14:28:21.736721+0000 mon.a (mon.0) 822 : audit [DBG] from='client.? 192.168.123.103:0/76095283' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:22.514 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[17524]: audit 2026-03-09T14:28:21.751733+0000 mon.c (mon.1) 37 : audit [DBG] from='client.? 
192.168.123.103:0/2130464964' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:22.514 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[17524]: audit 2026-03-09T14:28:21.751733+0000 mon.c (mon.1) 37 : audit [DBG] from='client.? 192.168.123.103:0/2130464964' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:22.514 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[17524]: audit 2026-03-09T14:28:21.757492+0000 mon.a (mon.0) 823 : audit [DBG] from='client.? 192.168.123.103:0/3393079844' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.514 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[17524]: audit 2026-03-09T14:28:21.757492+0000 mon.a (mon.0) 823 : audit [DBG] from='client.? 192.168.123.103:0/3393079844' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.514 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[17524]: audit 2026-03-09T14:28:21.793846+0000 mon.a (mon.0) 824 : audit [DBG] from='client.? 192.168.123.103:0/3310863829' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.514 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[17524]: audit 2026-03-09T14:28:21.793846+0000 mon.a (mon.0) 824 : audit [DBG] from='client.? 192.168.123.103:0/3310863829' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.514 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[17524]: audit 2026-03-09T14:28:21.799198+0000 mon.b (mon.2) 42 : audit [DBG] from='client.? 192.168.123.103:0/2502052350' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:22.514 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[17524]: audit 2026-03-09T14:28:21.799198+0000 mon.b (mon.2) 42 : audit [DBG] from='client.? 192.168.123.103:0/2502052350' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:22.514 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[17524]: audit 2026-03-09T14:28:22.101316+0000 mon.a (mon.0) 825 : audit [DBG] from='client.? 192.168.123.103:0/747976532' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:22.514 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[17524]: audit 2026-03-09T14:28:22.101316+0000 mon.a (mon.0) 825 : audit [DBG] from='client.? 192.168.123.103:0/747976532' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:22.514 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[17524]: audit 2026-03-09T14:28:22.116187+0000 mon.a (mon.0) 826 : audit [DBG] from='client.? 192.168.123.103:0/4049019892' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:22.514 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[17524]: audit 2026-03-09T14:28:22.116187+0000 mon.a (mon.0) 826 : audit [DBG] from='client.? 192.168.123.103:0/4049019892' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:22.514 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[17524]: audit 2026-03-09T14:28:22.122820+0000 mon.a (mon.0) 827 : audit [DBG] from='client.? 
192.168.123.103:0/1490749791' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.514 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[17524]: audit 2026-03-09T14:28:22.122820+0000 mon.a (mon.0) 827 : audit [DBG] from='client.? 192.168.123.103:0/1490749791' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.514 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[17524]: audit 2026-03-09T14:28:22.160650+0000 mon.a (mon.0) 828 : audit [DBG] from='client.? 192.168.123.103:0/4264437116' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.514 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[17524]: audit 2026-03-09T14:28:22.160650+0000 mon.a (mon.0) 828 : audit [DBG] from='client.? 192.168.123.103:0/4264437116' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.514 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[17524]: audit 2026-03-09T14:28:22.166685+0000 mon.a (mon.0) 829 : audit [DBG] from='client.? 192.168.123.103:0/2781584684' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:22.514 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[17524]: audit 2026-03-09T14:28:22.166685+0000 mon.a (mon.0) 829 : audit [DBG] from='client.? 192.168.123.103:0/2781584684' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:22.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:22 vm04 bash[19581]: cluster 2026-03-09T14:28:20.996241+0000 mgr.x (mgr.14150) 500 : cluster [DBG] pgmap v396: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.3 KiB/s rd, 341 B/s wr, 6 op/s 2026-03-09T14:28:22.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:22 vm04 bash[19581]: cluster 2026-03-09T14:28:20.996241+0000 mgr.x (mgr.14150) 500 : cluster [DBG] pgmap v396: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.3 KiB/s rd, 341 B/s wr, 6 op/s 2026-03-09T14:28:22.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:22 vm04 bash[19581]: audit 2026-03-09T14:28:21.419983+0000 mon.b (mon.2) 40 : audit [DBG] from='client.? 192.168.123.103:0/2750134627' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:22 vm04 bash[19581]: audit 2026-03-09T14:28:21.419983+0000 mon.b (mon.2) 40 : audit [DBG] from='client.? 192.168.123.103:0/2750134627' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:22 vm04 bash[19581]: audit 2026-03-09T14:28:21.425976+0000 mon.b (mon.2) 41 : audit [DBG] from='client.? 192.168.123.103:0/2188034253' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:22.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:22 vm04 bash[19581]: audit 2026-03-09T14:28:21.425976+0000 mon.b (mon.2) 41 : audit [DBG] from='client.? 192.168.123.103:0/2188034253' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:22.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:22 vm04 bash[19581]: audit 2026-03-09T14:28:21.736721+0000 mon.a (mon.0) 822 : audit [DBG] from='client.? 
192.168.123.103:0/76095283' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:22.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:22 vm04 bash[19581]: audit 2026-03-09T14:28:21.736721+0000 mon.a (mon.0) 822 : audit [DBG] from='client.? 192.168.123.103:0/76095283' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:22.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:22 vm04 bash[19581]: audit 2026-03-09T14:28:21.751733+0000 mon.c (mon.1) 37 : audit [DBG] from='client.? 192.168.123.103:0/2130464964' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:22.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:22 vm04 bash[19581]: audit 2026-03-09T14:28:21.751733+0000 mon.c (mon.1) 37 : audit [DBG] from='client.? 192.168.123.103:0/2130464964' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:22.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:22 vm04 bash[19581]: audit 2026-03-09T14:28:21.757492+0000 mon.a (mon.0) 823 : audit [DBG] from='client.? 192.168.123.103:0/3393079844' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:22 vm04 bash[19581]: audit 2026-03-09T14:28:21.757492+0000 mon.a (mon.0) 823 : audit [DBG] from='client.? 192.168.123.103:0/3393079844' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:22 vm04 bash[19581]: audit 2026-03-09T14:28:21.793846+0000 mon.a (mon.0) 824 : audit [DBG] from='client.? 192.168.123.103:0/3310863829' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:22 vm04 bash[19581]: audit 2026-03-09T14:28:21.793846+0000 mon.a (mon.0) 824 : audit [DBG] from='client.? 192.168.123.103:0/3310863829' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:22 vm04 bash[19581]: audit 2026-03-09T14:28:21.799198+0000 mon.b (mon.2) 42 : audit [DBG] from='client.? 192.168.123.103:0/2502052350' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:22.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:22 vm04 bash[19581]: audit 2026-03-09T14:28:21.799198+0000 mon.b (mon.2) 42 : audit [DBG] from='client.? 192.168.123.103:0/2502052350' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:22.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:22 vm04 bash[19581]: audit 2026-03-09T14:28:22.101316+0000 mon.a (mon.0) 825 : audit [DBG] from='client.? 192.168.123.103:0/747976532' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:22.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:22 vm04 bash[19581]: audit 2026-03-09T14:28:22.101316+0000 mon.a (mon.0) 825 : audit [DBG] from='client.? 192.168.123.103:0/747976532' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:22.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:22 vm04 bash[19581]: audit 2026-03-09T14:28:22.116187+0000 mon.a (mon.0) 826 : audit [DBG] from='client.? 
192.168.123.103:0/4049019892' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:22.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:22 vm04 bash[19581]: audit 2026-03-09T14:28:22.116187+0000 mon.a (mon.0) 826 : audit [DBG] from='client.? 192.168.123.103:0/4049019892' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:22.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:22 vm04 bash[19581]: audit 2026-03-09T14:28:22.122820+0000 mon.a (mon.0) 827 : audit [DBG] from='client.? 192.168.123.103:0/1490749791' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:22 vm04 bash[19581]: audit 2026-03-09T14:28:22.122820+0000 mon.a (mon.0) 827 : audit [DBG] from='client.? 192.168.123.103:0/1490749791' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:22 vm04 bash[19581]: audit 2026-03-09T14:28:22.160650+0000 mon.a (mon.0) 828 : audit [DBG] from='client.? 192.168.123.103:0/4264437116' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:22 vm04 bash[19581]: audit 2026-03-09T14:28:22.160650+0000 mon.a (mon.0) 828 : audit [DBG] from='client.? 192.168.123.103:0/4264437116' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:22 vm04 bash[19581]: audit 2026-03-09T14:28:22.166685+0000 mon.a (mon.0) 829 : audit [DBG] from='client.? 192.168.123.103:0/2781584684' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:22.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:22 vm04 bash[19581]: audit 2026-03-09T14:28:22.166685+0000 mon.a (mon.0) 829 : audit [DBG] from='client.? 192.168.123.103:0/2781584684' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:22.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:22 vm05 bash[20070]: cluster 2026-03-09T14:28:20.996241+0000 mgr.x (mgr.14150) 500 : cluster [DBG] pgmap v396: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.3 KiB/s rd, 341 B/s wr, 6 op/s 2026-03-09T14:28:22.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:22 vm05 bash[20070]: cluster 2026-03-09T14:28:20.996241+0000 mgr.x (mgr.14150) 500 : cluster [DBG] pgmap v396: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.3 KiB/s rd, 341 B/s wr, 6 op/s 2026-03-09T14:28:22.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:22 vm05 bash[20070]: audit 2026-03-09T14:28:21.419983+0000 mon.b (mon.2) 40 : audit [DBG] from='client.? 192.168.123.103:0/2750134627' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:22 vm05 bash[20070]: audit 2026-03-09T14:28:21.419983+0000 mon.b (mon.2) 40 : audit [DBG] from='client.? 192.168.123.103:0/2750134627' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:22 vm05 bash[20070]: audit 2026-03-09T14:28:21.425976+0000 mon.b (mon.2) 41 : audit [DBG] from='client.? 
192.168.123.103:0/2188034253' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:22.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:22 vm05 bash[20070]: audit 2026-03-09T14:28:21.425976+0000 mon.b (mon.2) 41 : audit [DBG] from='client.? 192.168.123.103:0/2188034253' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:22.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:22 vm05 bash[20070]: audit 2026-03-09T14:28:21.736721+0000 mon.a (mon.0) 822 : audit [DBG] from='client.? 192.168.123.103:0/76095283' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:22.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:22 vm05 bash[20070]: audit 2026-03-09T14:28:21.736721+0000 mon.a (mon.0) 822 : audit [DBG] from='client.? 192.168.123.103:0/76095283' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:22.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:22 vm05 bash[20070]: audit 2026-03-09T14:28:21.751733+0000 mon.c (mon.1) 37 : audit [DBG] from='client.? 192.168.123.103:0/2130464964' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:22.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:22 vm05 bash[20070]: audit 2026-03-09T14:28:21.751733+0000 mon.c (mon.1) 37 : audit [DBG] from='client.? 192.168.123.103:0/2130464964' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:22.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:22 vm05 bash[20070]: audit 2026-03-09T14:28:21.757492+0000 mon.a (mon.0) 823 : audit [DBG] from='client.? 192.168.123.103:0/3393079844' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:22 vm05 bash[20070]: audit 2026-03-09T14:28:21.757492+0000 mon.a (mon.0) 823 : audit [DBG] from='client.? 192.168.123.103:0/3393079844' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:22 vm05 bash[20070]: audit 2026-03-09T14:28:21.793846+0000 mon.a (mon.0) 824 : audit [DBG] from='client.? 192.168.123.103:0/3310863829' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:22 vm05 bash[20070]: audit 2026-03-09T14:28:21.793846+0000 mon.a (mon.0) 824 : audit [DBG] from='client.? 192.168.123.103:0/3310863829' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:22 vm05 bash[20070]: audit 2026-03-09T14:28:21.799198+0000 mon.b (mon.2) 42 : audit [DBG] from='client.? 192.168.123.103:0/2502052350' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:22.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:22 vm05 bash[20070]: audit 2026-03-09T14:28:21.799198+0000 mon.b (mon.2) 42 : audit [DBG] from='client.? 192.168.123.103:0/2502052350' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:22.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:22 vm05 bash[20070]: audit 2026-03-09T14:28:22.101316+0000 mon.a (mon.0) 825 : audit [DBG] from='client.? 
192.168.123.103:0/747976532' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:22.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:22 vm05 bash[20070]: audit 2026-03-09T14:28:22.101316+0000 mon.a (mon.0) 825 : audit [DBG] from='client.? 192.168.123.103:0/747976532' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:22.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:22 vm05 bash[20070]: audit 2026-03-09T14:28:22.116187+0000 mon.a (mon.0) 826 : audit [DBG] from='client.? 192.168.123.103:0/4049019892' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:22.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:22 vm05 bash[20070]: audit 2026-03-09T14:28:22.116187+0000 mon.a (mon.0) 826 : audit [DBG] from='client.? 192.168.123.103:0/4049019892' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:22.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:22 vm05 bash[20070]: audit 2026-03-09T14:28:22.122820+0000 mon.a (mon.0) 827 : audit [DBG] from='client.? 192.168.123.103:0/1490749791' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:22 vm05 bash[20070]: audit 2026-03-09T14:28:22.122820+0000 mon.a (mon.0) 827 : audit [DBG] from='client.? 192.168.123.103:0/1490749791' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:22 vm05 bash[20070]: audit 2026-03-09T14:28:22.160650+0000 mon.a (mon.0) 828 : audit [DBG] from='client.? 192.168.123.103:0/4264437116' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:22 vm05 bash[20070]: audit 2026-03-09T14:28:22.160650+0000 mon.a (mon.0) 828 : audit [DBG] from='client.? 192.168.123.103:0/4264437116' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:22.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:22 vm05 bash[20070]: audit 2026-03-09T14:28:22.166685+0000 mon.a (mon.0) 829 : audit [DBG] from='client.? 192.168.123.103:0/2781584684' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:22.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:22 vm05 bash[20070]: audit 2026-03-09T14:28:22.166685+0000 mon.a (mon.0) 829 : audit [DBG] from='client.? 
192.168.123.103:0/2781584684' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:22.801 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[37744]: debug ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:22] "GET /api/config HTTP/1.1" 200 - 2026-03-09T14:28:22.801 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[37744]: ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:22] "GET /api/config HTTP/1.1" 200 - 2026-03-09T14:28:23.269 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[37744]: debug ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:22] "GET /api/config HTTP/1.1" 200 - 2026-03-09T14:28:23.269 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:22 vm03 bash[37744]: ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:22] "GET /api/config HTTP/1.1" 200 - 2026-03-09T14:28:23.550 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[37744]: debug ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:23] "GET /api/config HTTP/1.1" 200 - 2026-03-09T14:28:23.550 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[37744]: ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:23] "GET /api/config HTTP/1.1" 200 - 2026-03-09T14:28:23.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[17524]: audit 2026-03-09T14:28:22.490054+0000 mon.a (mon.0) 830 : audit [DBG] from='client.? 192.168.123.103:0/853831692' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:23.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[17524]: audit 2026-03-09T14:28:22.490054+0000 mon.a (mon.0) 830 : audit [DBG] from='client.? 192.168.123.103:0/853831692' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:23.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[17524]: audit 2026-03-09T14:28:22.506345+0000 mon.a (mon.0) 831 : audit [DBG] from='client.? 192.168.123.103:0/1365238852' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:23.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[17524]: audit 2026-03-09T14:28:22.506345+0000 mon.a (mon.0) 831 : audit [DBG] from='client.? 192.168.123.103:0/1365238852' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:23.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[17524]: audit 2026-03-09T14:28:22.513248+0000 mon.c (mon.1) 38 : audit [DBG] from='client.? 192.168.123.103:0/1854697985' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:23.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[17524]: audit 2026-03-09T14:28:22.513248+0000 mon.c (mon.1) 38 : audit [DBG] from='client.? 192.168.123.103:0/1854697985' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:23.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[17524]: audit 2026-03-09T14:28:22.550224+0000 mon.c (mon.1) 39 : audit [DBG] from='client.? 192.168.123.103:0/3992601671' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:23.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[17524]: audit 2026-03-09T14:28:22.550224+0000 mon.c (mon.1) 39 : audit [DBG] from='client.? 
192.168.123.103:0/3992601671' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:23.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[17524]: audit 2026-03-09T14:28:22.556404+0000 mon.c (mon.1) 40 : audit [DBG] from='client.? 192.168.123.103:0/726917066' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:23.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[17524]: audit 2026-03-09T14:28:22.556404+0000 mon.c (mon.1) 40 : audit [DBG] from='client.? 192.168.123.103:0/726917066' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:23.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[17524]: audit 2026-03-09T14:28:22.872811+0000 mon.b (mon.2) 43 : audit [DBG] from='client.? 192.168.123.103:0/4274054927' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:23.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[17524]: audit 2026-03-09T14:28:22.872811+0000 mon.b (mon.2) 43 : audit [DBG] from='client.? 192.168.123.103:0/4274054927' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:23.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[17524]: audit 2026-03-09T14:28:22.888781+0000 mon.b (mon.2) 44 : audit [DBG] from='client.? 192.168.123.103:0/3298154411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:23.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[17524]: audit 2026-03-09T14:28:22.888781+0000 mon.b (mon.2) 44 : audit [DBG] from='client.? 192.168.123.103:0/3298154411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:23.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[17524]: audit 2026-03-09T14:28:22.896493+0000 mon.c (mon.1) 41 : audit [DBG] from='client.? 192.168.123.103:0/732561075' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:23.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[17524]: audit 2026-03-09T14:28:22.896493+0000 mon.c (mon.1) 41 : audit [DBG] from='client.? 192.168.123.103:0/732561075' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:23.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[17524]: audit 2026-03-09T14:28:22.933343+0000 mon.b (mon.2) 45 : audit [DBG] from='client.? 192.168.123.103:0/3818660876' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:23.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[17524]: audit 2026-03-09T14:28:22.933343+0000 mon.b (mon.2) 45 : audit [DBG] from='client.? 192.168.123.103:0/3818660876' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:23.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[17524]: audit 2026-03-09T14:28:22.939345+0000 mon.b (mon.2) 46 : audit [DBG] from='client.? 192.168.123.103:0/4232580951' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:23.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[17524]: audit 2026-03-09T14:28:22.939345+0000 mon.b (mon.2) 46 : audit [DBG] from='client.? 
192.168.123.103:0/4232580951' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:23.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[17524]: audit 2026-03-09T14:28:23.246063+0000 mon.a (mon.0) 832 : audit [DBG] from='client.? 192.168.123.103:0/2990659624' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:23.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[17524]: audit 2026-03-09T14:28:23.246063+0000 mon.a (mon.0) 832 : audit [DBG] from='client.? 192.168.123.103:0/2990659624' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-09T14:28:23.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[17524]: audit 2026-03-09T14:28:23.261464+0000 mon.c (mon.1) 42 : audit [DBG] from='client.? 192.168.123.103:0/627664950' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:23.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[17524]: audit 2026-03-09T14:28:23.261464+0000 mon.c (mon.1) 42 : audit [DBG] from='client.? 192.168.123.103:0/627664950' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:28:23.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[17524]: audit 2026-03-09T14:28:23.267696+0000 mon.a (mon.0) 833 : audit [DBG] from='client.? 192.168.123.103:0/3796813018' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:23.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[17524]: audit 2026-03-09T14:28:23.267696+0000 mon.a (mon.0) 833 : audit [DBG] from='client.? 192.168.123.103:0/3796813018' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:23.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[17524]: audit 2026-03-09T14:28:23.303475+0000 mon.a (mon.0) 834 : audit [DBG] from='client.? 192.168.123.103:0/146878642' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:23.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[17524]: audit 2026-03-09T14:28:23.303475+0000 mon.a (mon.0) 834 : audit [DBG] from='client.? 192.168.123.103:0/146878642' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-09T14:28:23.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[17524]: audit 2026-03-09T14:28:23.309100+0000 mon.a (mon.0) 835 : audit [DBG] from='client.? 192.168.123.103:0/1911457727' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-09T14:28:23.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[17524]: audit 2026-03-09T14:28:23.309100+0000 mon.a (mon.0) 835 : audit [DBG] from='client.? 
2026-03-09T14:28:23.719 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T14:28:23.719 INFO:tasks.cram.client.0.vm03.stdout:/home/ubuntu/cephtest/archive/cram.client.0/gwcli_create.t: failed
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout:--- gwcli_create.t
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout:+++ gwcli_create.t.err
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout:@@ -17,35 +17,29 @@
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: =============================
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli disks/ create pool=datapool image=block0 size=300M wwn=36001405da17b74481464e9fa968746d3
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls disks/ | grep 'o- disks' | awk -F'[' '{print $2}'
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout:- 300M, Disks: 1]
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout:+ 0.00Y, Disks: 0]
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls disks/ | grep 'o- datapool' | awk -F'[' '{print $2}'
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout:- datapool (300M)]
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls disks/ | grep 'o- block0' | awk -F'[' '{print $2}'
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout:- datapool/block0 (Unknown, 300M)]
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout:
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: Create the target IQN
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: =====================
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli iscsi-targets/ create target_iqn=iqn.2003-01.com.redhat.iscsi-gw:ceph-gw
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls iscsi-targets/ | grep 'o- iscsi-targets' | awk -F'[' '{print $2}'
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout:- DiscoveryAuth: None, Targets: 1]
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout:+ DiscoveryAuth: None, Targets: 0]
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls iscsi-targets/ | grep 'o- iqn.2003-01.com.redhat.iscsi-gw:ceph-gw' | awk -F'[' '{print $2}'
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout:- Auth: None, Gateways: 0]
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls iscsi-targets/ | grep 'o- disks' | awk -F'[' '{print $2}'
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout:- Disks: 0]
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls iscsi-targets/ | grep 'o- gateways' | awk -F'[' '{print $2}'
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout:- Up: 0/0, Portals: 0]
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls iscsi-targets/ | grep 'o- host-groups' | awk -F'[' '{print $2}'
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout:- Groups : 0]
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls iscsi-targets/ | grep 'o- hosts' | awk -F'[' '{print $2}'
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout:- Auth: ACL_ENABLED, Hosts: 0]
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout:
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: Create the first gateway
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: ========================
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: $ HOST=$(python3 -c "import socket; print(socket.getfqdn())")
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: > IP=`hostname -i | awk '{print $1}'`
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: > sudo $CENGINE exec $ISCSI_CONTAINER gwcli iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-gw/gateways create ip_addresses=$IP gateway_name=$HOST
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout:+ No such path /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-gw
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout:+ [255]
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls iscsi-targets/ | grep 'o- gateways' | awk -F'[' '{print $2}'
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout:- Up: 1/1, Portals: 1]
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout:
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: Create the second gateway
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: ========================
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout:@@ -59,27 +53,29 @@
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: > HOST=$(python3 -c "import socket; print(socket.getfqdn('$IP'))")
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: > sudo $CENGINE exec $ISCSI_CONTAINER gwcli iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-gw/gateways create ip_addresses=$IP gateway_name=$HOST
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: > fi
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout:+ No such path /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-gw
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout:+ [255]
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls iscsi-targets/ | grep 'o- gateways' | awk -F'[' '{print $2}'
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout:- Up: 2/2, Portals: 2]
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout:
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: Attach the disk
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: ===============
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-gw/disks/ add disk=datapool/block0
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout:+ No such path /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-gw
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout:+ [255]
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls iscsi-targets/ | grep 'o- disks' | awk -F'[' '{print $2}'
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout:- Disks: 1]
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout:
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: Create a host
2026-03-09T14:28:23.720 INFO:tasks.cram.client.0.vm03.stdout: =============
2026-03-09T14:28:23.721 INFO:tasks.cram.client.0.vm03.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-gw/hosts create client_iqn=iqn.1994-05.com.redhat:client
2026-03-09T14:28:23.721 INFO:tasks.cram.client.0.vm03.stdout:+ No such path /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-gw
2026-03-09T14:28:23.721 INFO:tasks.cram.client.0.vm03.stdout:+ [255]
2026-03-09T14:28:23.721 INFO:tasks.cram.client.0.vm03.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls iscsi-targets/ | grep 'o- hosts' | awk -F'[' '{print $2}'
2026-03-09T14:28:23.721 INFO:tasks.cram.client.0.vm03.stdout:- Auth: ACL_ENABLED, Hosts: 1]
2026-03-09T14:28:23.721 INFO:tasks.cram.client.0.vm03.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls iscsi-targets/ | grep 'o- iqn.1994-05.com.redhat:client' | awk -F'[' '{print $2}'
2026-03-09T14:28:23.721 INFO:tasks.cram.client.0.vm03.stdout:- Auth: None, Disks: 0(0.00Y)]
2026-03-09T14:28:23.721 INFO:tasks.cram.client.0.vm03.stdout:
2026-03-09T14:28:23.721 INFO:tasks.cram.client.0.vm03.stdout: Map the LUN
2026-03-09T14:28:23.721 INFO:tasks.cram.client.0.vm03.stdout: ===========
2026-03-09T14:28:23.721 INFO:tasks.cram.client.0.vm03.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-gw/hosts/iqn.1994-05.com.redhat:client disk disk=datapool/block0
2026-03-09T14:28:23.721 INFO:tasks.cram.client.0.vm03.stdout:+ No such path /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-gw
2026-03-09T14:28:23.721 INFO:tasks.cram.client.0.vm03.stdout:+ [255]
2026-03-09T14:28:23.721 INFO:tasks.cram.client.0.vm03.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls iscsi-targets/ | grep 'o- hosts' | awk -F'[' '{print $2}'
2026-03-09T14:28:23.721 INFO:tasks.cram.client.0.vm03.stdout:- Auth: ACL_ENABLED, Hosts: 1]
2026-03-09T14:28:23.721 INFO:tasks.cram.client.0.vm03.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls iscsi-targets/ | grep 'o- iqn.1994-05.com.redhat:client' | awk -F'[' '{print $2}'
2026-03-09T14:28:23.721 INFO:tasks.cram.client.0.vm03.stdout:- Auth: None, Disks: 1(300M)]
2026-03-09T14:28:23.721 INFO:tasks.cram.client.0.vm03.stdout:# Ran 1 tests, 0 skipped, 1 failed.
2026-03-09T14:28:23.721 DEBUG:teuthology.orchestra.run.vm03:> test -f /home/ubuntu/cephtest/archive/cram.client.0/gwcli_create.t.err || rm -f -- /home/ubuntu/cephtest/archive/cram.client.0/gwcli_create.t
2026-03-09T14:28:23.725 DEBUG:teuthology.orchestra.run.vm03:> rm -rf -- /home/ubuntu/cephtest/virtualenv /home/ubuntu/cephtest/clone.client.0 ; rmdir --ignore-fail-on-non-empty /home/ubuntu/cephtest/archive/cram.client.0
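The cram diff above is the heart of the failure: the `gwcli disks/ create` and `gwcli iscsi-targets/ create` steps printed no error, yet the summary counts the test greps out of `gwcli ls` stayed at `Disks: 0` and `Targets: 0`, and every subsequent step under the target IQN failed with `No such path`, which suggests the gateway never persisted the configuration. The assertions themselves just parse the bracketed summary after each `o- <node>` line; a rough Python equivalent of that parsing step (hypothetical helper, not part of the qa suite):

    import re

    def summary_counts(gwcli_ls_output):
        # Map each 'o- <node>' line of `gwcli ls` to its bracketed summary,
        # e.g. 'o- disks ..... [300M, Disks: 1]' -> {'disks': '300M, Disks: 1'}.
        counts = {}
        for line in gwcli_ls_output.splitlines():
            m = re.search(r"o- (\S+).*\[([^\]]*)\]", line)
            if m:
                counts[m.group(1)] = m.group(2)
        return counts

    # The test's expectation after 'gwcli disks/ create ... size=300M':
    assert summary_counts("o- disks ..... [300M, Disks: 1]")["disks"] == "300M, Disks: 1"
    # What this run actually produced (the create never took effect):
    assert summary_counts("o- disks ..... [0.00Y, Disks: 0]")["disks"] == "0.00Y, Disks: 0"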
2026-03-09T14:28:23.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:23 vm04 bash[19581]: audit 2026-03-09T14:28:22.490054+0000 mon.a (mon.0) 830 : audit [DBG] from='client.? 192.168.123.103:0/853831692' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:23.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:23 vm04 bash[19581]: audit 2026-03-09T14:28:22.506345+0000 mon.a (mon.0) 831 : audit [DBG] from='client.? 192.168.123.103:0/1365238852' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:23.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:23 vm04 bash[19581]: audit 2026-03-09T14:28:22.513248+0000 mon.c (mon.1) 38 : audit [DBG] from='client.? 192.168.123.103:0/1854697985' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:23.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:23 vm04 bash[19581]: audit 2026-03-09T14:28:22.550224+0000 mon.c (mon.1) 39 : audit [DBG] from='client.? 192.168.123.103:0/3992601671' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:23.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:23 vm04 bash[19581]: audit 2026-03-09T14:28:22.556404+0000 mon.c (mon.1) 40 : audit [DBG] from='client.? 192.168.123.103:0/726917066' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:23.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:23 vm04 bash[19581]: audit 2026-03-09T14:28:22.872811+0000 mon.b (mon.2) 43 : audit [DBG] from='client.? 192.168.123.103:0/4274054927' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:23.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:23 vm04 bash[19581]: audit 2026-03-09T14:28:22.888781+0000 mon.b (mon.2) 44 : audit [DBG] from='client.? 192.168.123.103:0/3298154411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:23.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:23 vm04 bash[19581]: audit 2026-03-09T14:28:22.896493+0000 mon.c (mon.1) 41 : audit [DBG] from='client.? 192.168.123.103:0/732561075' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:23.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:23 vm04 bash[19581]: audit 2026-03-09T14:28:22.933343+0000 mon.b (mon.2) 45 : audit [DBG] from='client.? 192.168.123.103:0/3818660876' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:23.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:23 vm04 bash[19581]: audit 2026-03-09T14:28:22.939345+0000 mon.b (mon.2) 46 : audit [DBG] from='client.? 192.168.123.103:0/4232580951' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:23.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:23 vm04 bash[19581]: audit 2026-03-09T14:28:23.246063+0000 mon.a (mon.0) 832 : audit [DBG] from='client.? 192.168.123.103:0/2990659624' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:23.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:23 vm04 bash[19581]: audit 2026-03-09T14:28:23.261464+0000 mon.c (mon.1) 42 : audit [DBG] from='client.? 192.168.123.103:0/627664950' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:23.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:23 vm04 bash[19581]: audit 2026-03-09T14:28:23.267696+0000 mon.a (mon.0) 833 : audit [DBG] from='client.? 192.168.123.103:0/3796813018' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:23.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:23 vm04 bash[19581]: audit 2026-03-09T14:28:23.303475+0000 mon.a (mon.0) 834 : audit [DBG] from='client.? 192.168.123.103:0/146878642' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:23.756 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:23 vm04 bash[19581]: audit 2026-03-09T14:28:23.309100+0000 mon.a (mon.0) 835 : audit [DBG] from='client.? 192.168.123.103:0/1911457727' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:23.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:23 vm05 bash[20070]: audit 2026-03-09T14:28:22.490054+0000 mon.a (mon.0) 830 : audit [DBG] from='client.? 192.168.123.103:0/853831692' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:23.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:23 vm05 bash[20070]: audit 2026-03-09T14:28:22.506345+0000 mon.a (mon.0) 831 : audit [DBG] from='client.? 192.168.123.103:0/1365238852' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:23.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:23 vm05 bash[20070]: audit 2026-03-09T14:28:22.513248+0000 mon.c (mon.1) 38 : audit [DBG] from='client.? 192.168.123.103:0/1854697985' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:23.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:23 vm05 bash[20070]: audit 2026-03-09T14:28:22.550224+0000 mon.c (mon.1) 39 : audit [DBG] from='client.? 192.168.123.103:0/3992601671' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:23.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:23 vm05 bash[20070]: audit 2026-03-09T14:28:22.556404+0000 mon.c (mon.1) 40 : audit [DBG] from='client.? 192.168.123.103:0/726917066' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:23.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:23 vm05 bash[20070]: audit 2026-03-09T14:28:22.872811+0000 mon.b (mon.2) 43 : audit [DBG] from='client.? 192.168.123.103:0/4274054927' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:23.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:23 vm05 bash[20070]: audit 2026-03-09T14:28:22.888781+0000 mon.b (mon.2) 44 : audit [DBG] from='client.? 192.168.123.103:0/3298154411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:23.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:23 vm05 bash[20070]: audit 2026-03-09T14:28:22.896493+0000 mon.c (mon.1) 41 : audit [DBG] from='client.? 192.168.123.103:0/732561075' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:23.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:23 vm05 bash[20070]: audit 2026-03-09T14:28:22.933343+0000 mon.b (mon.2) 45 : audit [DBG] from='client.? 192.168.123.103:0/3818660876' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:23.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:23 vm05 bash[20070]: audit 2026-03-09T14:28:22.939345+0000 mon.b (mon.2) 46 : audit [DBG] from='client.? 192.168.123.103:0/4232580951' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:23.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:23 vm05 bash[20070]: audit 2026-03-09T14:28:23.246063+0000 mon.a (mon.0) 832 : audit [DBG] from='client.? 192.168.123.103:0/2990659624' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:23.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:23 vm05 bash[20070]: audit 2026-03-09T14:28:23.261464+0000 mon.c (mon.1) 42 : audit [DBG] from='client.? 192.168.123.103:0/627664950' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:23.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:23 vm05 bash[20070]: audit 2026-03-09T14:28:23.267696+0000 mon.a (mon.0) 833 : audit [DBG] from='client.? 192.168.123.103:0/3796813018' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:23.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:23 vm05 bash[20070]: audit 2026-03-09T14:28:23.303475+0000 mon.a (mon.0) 834 : audit [DBG] from='client.? 192.168.123.103:0/146878642' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:23.756 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:23 vm05 bash[20070]: audit 2026-03-09T14:28:23.309100+0000 mon.a (mon.0) 835 : audit [DBG] from='client.? 192.168.123.103:0/1911457727' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:24.050 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[37744]: debug ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:23] "GET /api/config HTTP/1.1" 200 -
2026-03-09T14:28:24.051 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:23 vm03 bash[37744]: ::ffff:127.0.0.1 - - [09/Mar/2026 14:28:23] "GET /api/config HTTP/1.1" 200 -
2026-03-09T14:28:24.175 DEBUG:teuthology.orchestra.run.vm04:> test -f /home/ubuntu/cephtest/archive/cram.client.1/iscsi_client.t.err || rm -f -- /home/ubuntu/cephtest/archive/cram.client.1/iscsi_client.t
2026-03-09T14:28:24.178 DEBUG:teuthology.orchestra.run.vm04:> rm -rf -- /home/ubuntu/cephtest/virtualenv /home/ubuntu/cephtest/clone.client.1 ; rmdir --ignore-fail-on-non-empty /home/ubuntu/cephtest/archive/cram.client.1
2026-03-09T14:28:24.633 DEBUG:teuthology.orchestra.run.vm05:> test -f /home/ubuntu/cephtest/archive/cram.client.2/gwcli_delete.t.err || rm -f -- /home/ubuntu/cephtest/archive/cram.client.2/gwcli_delete.t
2026-03-09T14:28:24.637 DEBUG:teuthology.orchestra.run.vm05:> rm -rf -- /home/ubuntu/cephtest/virtualenv /home/ubuntu/cephtest/clone.client.2 ; rmdir --ignore-fail-on-non-empty /home/ubuntu/cephtest/archive/cram.client.2
2026-03-09T14:28:24.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:24 vm04 bash[19581]: cluster 2026-03-09T14:28:22.996500+0000 mgr.x (mgr.14150) 501 : cluster [DBG] pgmap v397: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 3.5 KiB/s rd, 341 B/s wr, 5 op/s
2026-03-09T14:28:24.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:24 vm04 bash[19581]: audit 2026-03-09T14:28:23.623508+0000 mon.a (mon.0) 836 : audit [DBG] from='client.? 192.168.123.103:0/4263241285' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:24.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:24 vm04 bash[19581]: audit 2026-03-09T14:28:23.637899+0000 mon.a (mon.0) 837 : audit [DBG] from='client.? 192.168.123.103:0/1918176557' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:24.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:24 vm04 bash[19581]: audit 2026-03-09T14:28:23.644013+0000 mon.a (mon.0) 838 : audit [DBG] from='client.? 192.168.123.103:0/4219259818' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
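The `test -f <name>.t.err || rm -f <name>.t` commands above are the cram task's selective cleanup: cram leaves a `.t.err` file next to any test whose output diverged, so passing tests (iscsi_client.t, gwcli_delete.t) are removed from the archive while the failing gwcli_create.t is kept together with its `.t.err`. The same idea restated as a sketch (hypothetical helper, not the task's actual code):

    from pathlib import Path

    def prune_passed_tests(archive_dir):
        # cram writes '<test>.t.err' beside any test whose actual output
        # diverged from the expected output; keep those pairs for debugging
        # and delete the .t copies of tests that passed.
        for t in Path(archive_dir).glob("*.t"):
            if not t.with_name(t.name + ".err").exists():
                t.unlink()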
2026-03-09T14:28:24.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:24 vm04 bash[19581]: audit 2026-03-09T14:28:23.680276+0000 mon.a (mon.0) 839 : audit [DBG] from='client.? 192.168.123.103:0/2672890857' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:24.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:24 vm04 bash[19581]: audit 2026-03-09T14:28:23.685716+0000 mon.a (mon.0) 840 : audit [DBG] from='client.? 192.168.123.103:0/516751860' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:24.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:24 vm05 bash[20070]: cluster 2026-03-09T14:28:22.996500+0000 mgr.x (mgr.14150) 501 : cluster [DBG] pgmap v397: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 3.5 KiB/s rd, 341 B/s wr, 5 op/s
2026-03-09T14:28:24.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:24 vm05 bash[20070]: audit 2026-03-09T14:28:23.623508+0000 mon.a (mon.0) 836 : audit [DBG] from='client.? 192.168.123.103:0/4263241285' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:24.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:24 vm05 bash[20070]: audit 2026-03-09T14:28:23.637899+0000 mon.a (mon.0) 837 : audit [DBG] from='client.? 192.168.123.103:0/1918176557' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:24.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:24 vm05 bash[20070]: audit 2026-03-09T14:28:23.644013+0000 mon.a (mon.0) 838 : audit [DBG] from='client.? 192.168.123.103:0/4219259818' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:24.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:24 vm05 bash[20070]: audit 2026-03-09T14:28:23.680276+0000 mon.a (mon.0) 839 : audit [DBG] from='client.? 192.168.123.103:0/2672890857' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:24.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:24 vm05 bash[20070]: audit 2026-03-09T14:28:23.685716+0000 mon.a (mon.0) 840 : audit [DBG] from='client.? 192.168.123.103:0/516751860' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:24.801 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:24 vm03 bash[17524]: cluster 2026-03-09T14:28:22.996500+0000 mgr.x (mgr.14150) 501 : cluster [DBG] pgmap v397: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 3.5 KiB/s rd, 341 B/s wr, 5 op/s
2026-03-09T14:28:24.801 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:24 vm03 bash[17524]: audit 2026-03-09T14:28:23.623508+0000 mon.a (mon.0) 836 : audit [DBG] from='client.? 192.168.123.103:0/4263241285' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-09T14:28:24.801 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:24 vm03 bash[17524]: audit 2026-03-09T14:28:23.637899+0000 mon.a (mon.0) 837 : audit [DBG] from='client.? 192.168.123.103:0/1918176557' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-09T14:28:24.801 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:24 vm03 bash[17524]: audit 2026-03-09T14:28:23.644013+0000 mon.a (mon.0) 838 : audit [DBG] from='client.? 192.168.123.103:0/4219259818' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:24.801 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:24 vm03 bash[17524]: audit 2026-03-09T14:28:23.680276+0000 mon.a (mon.0) 839 : audit [DBG] from='client.? 192.168.123.103:0/2672890857' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-09T14:28:24.801 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:24 vm03 bash[17524]: audit 2026-03-09T14:28:23.685716+0000 mon.a (mon.0) 840 : audit [DBG] from='client.? 192.168.123.103:0/516751860' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-09T14:28:25.036 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 105, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 83, in run_one_task
    return task(**kwargs)
  File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/cram.py", line 97, in task
    _run_tests(ctx, role)
  File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/cram.py", line 147, in _run_tests
    remote.run(
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm03 with status 1: 'CEPH_REF=master CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/virtualenv/bin/cram -v -- /home/ubuntu/cephtest/archive/cram.client.0/*.t'
2026-03-09T14:28:25.037 DEBUG:teuthology.run_tasks:Unwinding manager ceph_iscsi_client
2026-03-09T14:28:25.039 DEBUG:teuthology.run_tasks:Unwinding manager install
2026-03-09T14:28:25.041 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer...
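The frames above are the standard teuthology failure path: remote.run() waits for the remote command, _raise_for_status() turns the non-zero exit status into CommandFailedError, and run_tasks logs the exception before unwinding the remaining task managers, as the next entries show. Reduced to a sketch (simplified from the frame names above, not the exact source):

    class CommandFailedError(Exception):
        # Simplified stand-in for teuthology.exceptions.CommandFailedError.
        def __init__(self, command, exitstatus, node=None):
            self.command = command
            self.exitstatus = exitstatus
            self.node = node
            super().__init__("Command failed on %s with status %d: %r"
                             % (node, exitstatus, command))

    def raise_for_status(command, exitstatus, node):
        # The check at run.py line 181 in the traceback reduces to this:
        # any non-zero remote exit status becomes an exception, so the
        # cram run's status-1 result aborts the job here.
        if exitstatus != 0:
            raise CommandFailedError(command, exitstatus, node=node)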
2026-03-09T14:28:25.041 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-09T14:28:25.042 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-09T14:28:25.043 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-09T14:28:25.077 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system.
2026-03-09T14:28:25.077 DEBUG:teuthology.orchestra.run.vm03:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done
2026-03-09T14:28:25.083 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system.
2026-03-09T14:28:25.083 DEBUG:teuthology.orchestra.run.vm04:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done
2026-03-09T14:28:25.089 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system.
2026-03-09T14:28:25.090 DEBUG:teuthology.orchestra.run.vm05:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done
2026-03-09T14:28:25.124 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T14:28:25.133 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-09T14:28:25.149 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-09T14:28:25.296 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-09T14:28:25.297 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-09T14:28:25.307 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T14:28:25.308 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T14:28:25.321 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-09T14:28:25.321 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
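Each host gets the same teardown loop: one apt-get purge per package with `|| true` appended, so a package that was never installed on a given host cannot abort the rest of the cleanup. How such a command string might be assembled (hypothetical helper; the install task builds its own):

    import shlex

    def purge_command(packages):
        # Rebuilds the shape of the logged teardown loop: purge each package
        # in its own apt-get invocation and swallow failures, so one missing
        # package does not stop the removal of the others.
        apt = ('sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes '
               '-o Dpkg::Options::="--force-confdef" '
               '-o Dpkg::Options::="--force-confold" purge $d || true')
        return "for d in %s ; do %s ; done" % (
            " ".join(shlex.quote(p) for p in packages), apt)

    print(purge_command(["ceph", "cephadm", "ceph-mds"]))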
2026-03-09T14:28:25.471 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:25.472 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mon libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T14:28:25.472 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:25.481 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED:
2026-03-09T14:28:25.482 INFO:teuthology.orchestra.run.vm04.stdout: ceph*
2026-03-09T14:28:25.489 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:25.489 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T14:28:25.489 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:25.497 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be REMOVED:
2026-03-09T14:28:25.497 INFO:teuthology.orchestra.run.vm05.stdout: ceph*
2026-03-09T14:28:25.502 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:25.503 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T14:28:25.503 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:25.514 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-09T14:28:25.515 INFO:teuthology.orchestra.run.vm03.stdout: ceph*
2026-03-09T14:28:25.648 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-09T14:28:25.648 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 47.1 kB disk space will be freed.
2026-03-09T14:28:25.662 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-09T14:28:25.662 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 47.1 kB disk space will be freed.
2026-03-09T14:28:25.688 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118698 files and directories currently installed.)
2026-03-09T14:28:25.690 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:25.700 INFO:teuthology.orchestra.run.vm05.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118698 files and directories currently installed.)
2026-03-09T14:28:25.701 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-09T14:28:25.701 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 47.1 kB disk space will be freed.
2026-03-09T14:28:25.703 INFO:teuthology.orchestra.run.vm05.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:25.740 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118698 files and directories currently installed.)
2026-03-09T14:28:25.742 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:26.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:26 vm04 bash[19581]: cluster 2026-03-09T14:28:24.996772+0000 mgr.x (mgr.14150) 502 : cluster [DBG] pgmap v398: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 5.3 KiB/s rd, 341 B/s wr, 8 op/s
2026-03-09T14:28:26.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:26 vm05 bash[20070]: cluster 2026-03-09T14:28:24.996772+0000 mgr.x (mgr.14150) 502 : cluster [DBG] pgmap v398: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 5.3 KiB/s rd, 341 B/s wr, 8 op/s
2026-03-09T14:28:26.771 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:26.795 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:26.801 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:26 vm03 bash[17524]: cluster 2026-03-09T14:28:24.996772+0000 mgr.x (mgr.14150) 502 : cluster [DBG] pgmap v398: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 5.3 KiB/s rd, 341 B/s wr, 8 op/s
2026-03-09T14:28:26.803 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-09T14:28:26.828 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-09T14:28:26.839 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:26.871 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T14:28:26.982 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-09T14:28:26.983 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-09T14:28:27.005 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-09T14:28:27.006 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-09T14:28:27.059 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T14:28:27.059 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T14:28:27.135 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:27.136 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mon libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T14:28:27.136 INFO:teuthology.orchestra.run.vm04.stdout: python-asyncssh-doc python3-asyncssh
2026-03-09T14:28:27.136 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:27.149 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED:
2026-03-09T14:28:27.149 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-cephadm* cephadm*
2026-03-09T14:28:27.182 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:27.183 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T14:28:27.183 INFO:teuthology.orchestra.run.vm05.stdout: python-asyncssh-doc python3-asyncssh
2026-03-09T14:28:27.183 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:27.194 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be REMOVED:
2026-03-09T14:28:27.195 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-cephadm* cephadm*
2026-03-09T14:28:27.238 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:27.238 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T14:28:27.238 INFO:teuthology.orchestra.run.vm03.stdout: python-asyncssh-doc python3-asyncssh
2026-03-09T14:28:27.238 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:27.248 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-09T14:28:27.248 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-cephadm* cephadm*
2026-03-09T14:28:27.322 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded.
2026-03-09T14:28:27.322 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 1775 kB disk space will be freed.
2026-03-09T14:28:27.360 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118696 files and directories currently installed.)
2026-03-09T14:28:27.361 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded.
2026-03-09T14:28:27.361 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 1775 kB disk space will be freed.
2026-03-09T14:28:27.362 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:27.380 INFO:teuthology.orchestra.run.vm04.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:27.398 INFO:teuthology.orchestra.run.vm05.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118696 files and directories currently installed.)
2026-03-09T14:28:27.400 INFO:teuthology.orchestra.run.vm05.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:27.409 INFO:teuthology.orchestra.run.vm04.stdout:Looking for files to backup/remove ...
2026-03-09T14:28:27.410 INFO:teuthology.orchestra.run.vm04.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*.
2026-03-09T14:28:27.411 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded.
2026-03-09T14:28:27.411 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 1775 kB disk space will be freed.
2026-03-09T14:28:27.411 INFO:teuthology.orchestra.run.vm04.stdout:Removing user `cephadm' ...
2026-03-09T14:28:27.412 INFO:teuthology.orchestra.run.vm04.stdout:Warning: group `nogroup' has no more members.
2026-03-09T14:28:27.417 INFO:teuthology.orchestra.run.vm05.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:27.422 INFO:teuthology.orchestra.run.vm04.stdout:Done.
2026-03-09T14:28:27.444 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T14:28:27.445 INFO:teuthology.orchestra.run.vm05.stdout:Looking for files to backup/remove ...
2026-03-09T14:28:27.446 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118696 files and directories currently installed.)
2026-03-09T14:28:27.446 INFO:teuthology.orchestra.run.vm05.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*.
2026-03-09T14:28:27.448 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:27.448 INFO:teuthology.orchestra.run.vm05.stdout:Removing user `cephadm' ...
2026-03-09T14:28:27.448 INFO:teuthology.orchestra.run.vm05.stdout:Warning: group `nogroup' has no more members.
2026-03-09T14:28:27.459 INFO:teuthology.orchestra.run.vm05.stdout:Done.
2026-03-09T14:28:27.465 INFO:teuthology.orchestra.run.vm03.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:27.481 INFO:teuthology.orchestra.run.vm05.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T14:28:27.497 INFO:teuthology.orchestra.run.vm03.stdout:Looking for files to backup/remove ...
2026-03-09T14:28:27.498 INFO:teuthology.orchestra.run.vm03.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*.
2026-03-09T14:28:27.500 INFO:teuthology.orchestra.run.vm03.stdout:Removing user `cephadm' ...
2026-03-09T14:28:27.500 INFO:teuthology.orchestra.run.vm03.stdout:Warning: group `nogroup' has no more members.
2026-03-09T14:28:27.510 INFO:teuthology.orchestra.run.vm03.stdout:Done.
2026-03-09T14:28:27.534 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T14:28:27.555 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118622 files and directories currently installed.)
2026-03-09T14:28:27.557 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:27.590 INFO:teuthology.orchestra.run.vm05.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118622 files and directories currently installed.)
2026-03-09T14:28:27.592 INFO:teuthology.orchestra.run.vm05.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:27.659 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118622 files and directories currently installed.)
2026-03-09T14:28:27.662 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:28.301 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:27 vm03 bash[37744]: debug there is no tcmu-runner data available
2026-03-09T14:28:28.630 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:28.663 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-09T14:28:28.673 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:28.709 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-09T14:28:28.754 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:28.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:28 vm04 bash[19581]: cluster 2026-03-09T14:28:26.997032+0000 mgr.x (mgr.14150) 503 : cluster [DBG] pgmap v399: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.5 KiB/s rd, 341 B/s wr, 7 op/s
2026-03-09T14:28:28.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:28 vm05 bash[20070]: cluster 2026-03-09T14:28:26.997032+0000 mgr.x (mgr.14150) 503 : cluster [DBG] pgmap v399: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.5 KiB/s rd, 341 B/s wr, 7 op/s
2026-03-09T14:28:28.756 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:28 vm03 bash[17524]: cluster 2026-03-09T14:28:26.997032+0000 mgr.x (mgr.14150) 503 : cluster [DBG] pgmap v399: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.5 KiB/s rd, 341 B/s wr, 7 op/s
2026-03-09T14:28:28.787 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T14:28:28.850 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-09T14:28:28.850 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-09T14:28:28.888 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-09T14:28:28.889 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-09T14:28:28.983 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T14:28:28.984 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T14:28:28.994 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:28.995 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mon libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T14:28:28.995 INFO:teuthology.orchestra.run.vm04.stdout: python-asyncssh-doc python3-asyncssh
2026-03-09T14:28:28.995 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:29.005 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:28 vm05 bash[38699]: debug there is no tcmu-runner data available
2026-03-09T14:28:29.008 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED:
2026-03-09T14:28:29.009 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mds*
2026-03-09T14:28:29.070 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:29.070 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T14:28:29.071 INFO:teuthology.orchestra.run.vm05.stdout: python-asyncssh-doc python3-asyncssh
2026-03-09T14:28:29.071 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:29.081 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be REMOVED:
2026-03-09T14:28:29.082 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mds*
2026-03-09T14:28:29.145 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:29.146 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T14:28:29.146 INFO:teuthology.orchestra.run.vm03.stdout: python-asyncssh-doc python3-asyncssh
2026-03-09T14:28:29.146 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:29.156 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-09T14:28:29.156 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mds*
2026-03-09T14:28:29.179 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-09T14:28:29.179 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 7437 kB disk space will be freed.
2026-03-09T14:28:29.213 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 118622 files and directories currently installed.)
2026-03-09T14:28:29.214 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:29.256 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-09T14:28:29.256 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 7437 kB disk space will be freed.
2026-03-09T14:28:29.295 INFO:teuthology.orchestra.run.vm05.stdout:(Reading database ... 118622 files and directories currently installed.)
2026-03-09T14:28:29.297 INFO:teuthology.orchestra.run.vm05.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:29.319 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-09T14:28:29.320 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 7437 kB disk space will be freed.
2026-03-09T14:28:29.353 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 118622 files and directories currently installed.)
2026-03-09T14:28:29.355 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:29.495 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:29 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:29.495 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:29 vm04 bash[19581]: audit 2026-03-09T14:28:27.917603+0000 mgr.x (mgr.14150) 504 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:28:29.495 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:29 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:29.496 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:29 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:29.496 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:29 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:29.683 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:29 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:29.683 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:29 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:29.683 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:29 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:29.683 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:29 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:29.683 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:29 vm05 bash[20070]: audit 2026-03-09T14:28:27.917603+0000 mgr.x (mgr.14150) 504 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:28:29.683 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:29 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:29.724 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T14:28:29.731 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:29 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:29.731 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:29 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:29.732 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:29 vm03 bash[17524]: audit 2026-03-09T14:28:27.917603+0000 mgr.x (mgr.14150) 504 : audit [DBG] from='client.14427 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:28:29.732 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:29 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:29.732 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:29 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:29.732 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:29 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:29.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:29 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:29.755 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:29 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:29.755 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:29 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:29.755 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:29 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:29.786 INFO:teuthology.orchestra.run.vm05.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T14:28:29.819 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T14:28:29.829 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 118614 files and directories currently installed.)
2026-03-09T14:28:29.831 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:29.902 INFO:teuthology.orchestra.run.vm05.stdout:(Reading database ... 118614 files and directories currently installed.)
2026-03-09T14:28:29.904 INFO:teuthology.orchestra.run.vm05.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:29.925 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 118614 files and directories currently installed.)
2026-03-09T14:28:29.926 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:30.005 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:29 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:30.005 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:29 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:30.005 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:29 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:30.006 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:29 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:30.006 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:29 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:30.050 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:29 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:30.050 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:29 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:30.051 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:29 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:30.051 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:29 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:30.051 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:29 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:30.189 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:29 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:30.189 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:29 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:30.189 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:29 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:30.190 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:29 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:30.482 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:30 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:30.482 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:30 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:30.482 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:30 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:30.482 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:30 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:30.482 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:30 vm03 bash[17524]: audit 2026-03-09T14:28:28.891190+0000 mgr.x (mgr.14150) 505 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:28:30.483 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:30 vm03 bash[17524]: cluster 2026-03-09T14:28:28.997333+0000 mgr.x (mgr.14150) 506 : cluster [DBG] pgmap v400: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.5 KiB/s rd, 341 B/s wr, 7 op/s
2026-03-09T14:28:30.483 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:30 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:30.483 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:30 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:30.483 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:30 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:30.483 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:30 vm04 bash[19581]: audit 2026-03-09T14:28:28.891190+0000 mgr.x (mgr.14150) 505 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:28:30.483 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:30 vm04 bash[19581]: cluster 2026-03-09T14:28:28.997333+0000 mgr.x (mgr.14150) 506 : cluster [DBG] pgmap v400: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.5 KiB/s rd, 341 B/s wr, 7 op/s
2026-03-09T14:28:30.483 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:30 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:30.483 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:30 vm05 bash[20070]: audit 2026-03-09T14:28:28.891190+0000 mgr.x (mgr.14150) 505 : audit [DBG] from='client.24376 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:28:30.483 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:30 vm05 bash[20070]: cluster 2026-03-09T14:28:28.997333+0000 mgr.x (mgr.14150) 506 : cluster [DBG] pgmap v400: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.5 KiB/s rd, 341 B/s wr, 7 op/s
2026-03-09T14:28:30.483 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:30 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:30.483 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:30 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:30.483 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:30 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:30.483 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:30 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:30.484 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:30 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:30.484 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:30 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:31.292 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:31.325 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-09T14:28:31.351 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:31.383 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-09T14:28:31.447 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:31.480 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T14:28:31.504 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-09T14:28:31.505 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-09T14:28:31.546 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-09T14:28:31.547 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-09T14:28:31.618 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:31.618 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core ceph-mon libboost-iostreams1.74.0
2026-03-09T14:28:31.618 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libpmemobj1 python-asyncssh-doc python-pastedeploy-tpl
2026-03-09T14:28:31.618 INFO:teuthology.orchestra.run.vm04.stdout: python3-asyncssh python3-cachetools python3-cheroot python3-cherrypy3
2026-03-09T14:28:31.618 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:31.618 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:31.618 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:31.618 INFO:teuthology.orchestra.run.vm04.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:31.618 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend python3-psutil python3-pyinotify python3-repoze.lru
2026-03-09T14:28:31.618 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T14:28:31.618 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T14:28:31.618 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T14:28:31.618 INFO:teuthology.orchestra.run.vm04.stdout: python3-waitress python3-webob python3-websocket python3-webtest
2026-03-09T14:28:31.618 INFO:teuthology.orchestra.run.vm04.stdout: python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:31.618 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:31.629 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED:
2026-03-09T14:28:31.629 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local*
2026-03-09T14:28:31.629 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-k8sevents*
2026-03-09T14:28:31.650 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T14:28:31.650 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T14:28:31.684 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:31.684 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core ceph-mon libboost-iostreams1.74.0
2026-03-09T14:28:31.684 INFO:teuthology.orchestra.run.vm05.stdout: libboost-thread1.74.0 libpmemobj1 python-asyncssh-doc python-pastedeploy-tpl
2026-03-09T14:28:31.684 INFO:teuthology.orchestra.run.vm05.stdout: python3-asyncssh python3-cachetools python3-cheroot python3-cherrypy3
2026-03-09T14:28:31.684 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:31.684 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:31.684 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:31.684 INFO:teuthology.orchestra.run.vm05.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:31.684 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend python3-psutil python3-pyinotify python3-repoze.lru
2026-03-09T14:28:31.684 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T14:28:31.684 INFO:teuthology.orchestra.run.vm05.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T14:28:31.684 INFO:teuthology.orchestra.run.vm05.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T14:28:31.684 INFO:teuthology.orchestra.run.vm05.stdout: python3-waitress python3-webob python3-websocket python3-webtest
2026-03-09T14:28:31.684 INFO:teuthology.orchestra.run.vm05.stdout: python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:31.685 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:31.695 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be REMOVED:
2026-03-09T14:28:31.695 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local*
2026-03-09T14:28:31.696 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-k8sevents*
2026-03-09T14:28:31.789 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:31.789 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core ceph-mon libboost-iostreams1.74.0
2026-03-09T14:28:31.790 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libpmemobj1 python-asyncssh-doc python-pastedeploy-tpl
2026-03-09T14:28:31.790 INFO:teuthology.orchestra.run.vm03.stdout: python3-asyncssh python3-cachetools python3-cheroot python3-cherrypy3
2026-03-09T14:28:31.790 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:31.790 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:31.790 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:31.790 INFO:teuthology.orchestra.run.vm03.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:31.790 INFO:teuthology.orchestra.run.vm03.stdout: python3-portend python3-psutil python3-pyinotify python3-repoze.lru
2026-03-09T14:28:31.790 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T14:28:31.790 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T14:28:31.790 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T14:28:31.790 INFO:teuthology.orchestra.run.vm03.stdout: python3-waitress python3-webob python3-websocket python3-webtest
2026-03-09T14:28:31.790 INFO:teuthology.orchestra.run.vm03.stdout: python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:31.791 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:31.797 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 4 to remove and 10 not upgraded.
2026-03-09T14:28:31.797 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 165 MB disk space will be freed.
2026-03-09T14:28:31.803 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-09T14:28:31.803 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local*
2026-03-09T14:28:31.803 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-k8sevents*
2026-03-09T14:28:31.832 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 118614 files and directories currently installed.)
2026-03-09T14:28:31.834 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:31.846 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:31.867 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 4 to remove and 10 not upgraded.
2026-03-09T14:28:31.867 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 165 MB disk space will be freed.
2026-03-09T14:28:31.872 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:31.901 INFO:teuthology.orchestra.run.vm05.stdout:(Reading database ... 118614 files and directories currently installed.)
2026-03-09T14:28:31.903 INFO:teuthology.orchestra.run.vm05.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:31.910 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:31.914 INFO:teuthology.orchestra.run.vm05.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:31.938 INFO:teuthology.orchestra.run.vm05.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:31.960 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 4 to remove and 10 not upgraded.
2026-03-09T14:28:31.960 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 165 MB disk space will be freed.
2026-03-09T14:28:31.976 INFO:teuthology.orchestra.run.vm05.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:31.994 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 118614 files and directories currently installed.)
2026-03-09T14:28:31.996 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:32.005 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:32.030 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:32.067 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:32.255 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:32.255 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:32.255 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:32.255 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:32.255 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:32.255 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:32.255 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:32.255 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:32.255 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:32.406 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 118030 files and directories currently installed.)
2026-03-09T14:28:32.408 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:32.430 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:32 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:32.430 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:32 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:32.430 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:32 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:32.430 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:32 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:32.430 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:32 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:32.469 INFO:teuthology.orchestra.run.vm05.stdout:(Reading database ... 118030 files and directories currently installed.)
2026-03-09T14:28:32.471 INFO:teuthology.orchestra.run.vm05.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:32.571 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:32.571 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:32 vm04 bash[19581]: cluster 2026-03-09T14:28:30.997625+0000 mgr.x (mgr.14150) 507 : cluster [DBG] pgmap v401: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 5.2 KiB/s rd, 170 B/s wr, 7 op/s
2026-03-09T14:28:32.571 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:32 vm04 bash[19581]: cluster 2026-03-09T14:28:30.997625+0000 mgr.x (mgr.14150) 507 : cluster [DBG] pgmap v401: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 5.2 KiB/s rd, 170 B/s wr, 7 op/s
2026-03-09T14:28:32.571 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:32.571 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:32.571 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
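The flood of KillMode=none warnings above is systemd re-parsing the cephadm-generated template unit named in the message, /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service, every time one of its daemon instances is touched. If the noise ever needs to be silenced on a lab node, a minimal sketch (assuming a stock systemd; cephadm may overwrite the unit whenever it regenerates it, so this is a workaround, not the proper fix) is a drop-in that adopts the KillMode the warning itself suggests:

    # hypothetical drop-in for the template unit named in the warning
    sudo mkdir -p /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service.d
    printf '[Service]\nKillMode=mixed\n' | \
        sudo tee /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service.d/killmode.conf
    sudo systemctl daemon-reload

A drop-in on the template applies to every instance, so all of the mon/mgr/osd/iscsi daemons above would stop triggering the warning; the lasting fix belongs in the unit file cephadm generates.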
2026-03-09T14:28:32.580 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118030 files and directories currently installed.) 2026-03-09T14:28:32.582 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:28:32.647 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:32.647 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:32 vm05 bash[20070]: cluster 2026-03-09T14:28:30.997625+0000 mgr.x (mgr.14150) 507 : cluster [DBG] pgmap v401: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 5.2 KiB/s rd, 170 B/s wr, 7 op/s 2026-03-09T14:28:32.647 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:32 vm05 bash[20070]: cluster 2026-03-09T14:28:30.997625+0000 mgr.x (mgr.14150) 507 : cluster [DBG] pgmap v401: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 5.2 KiB/s rd, 170 B/s wr, 7 op/s 2026-03-09T14:28:32.647 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:32.647 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:32.647 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:32.647 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:32.744 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:32 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:32.744 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:32 vm03 bash[17524]: cluster 2026-03-09T14:28:30.997625+0000 mgr.x (mgr.14150) 507 : cluster [DBG] pgmap v401: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 5.2 KiB/s rd, 170 B/s wr, 7 op/s 2026-03-09T14:28:32.744 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:32 vm03 bash[17524]: cluster 2026-03-09T14:28:30.997625+0000 mgr.x (mgr.14150) 507 : cluster [DBG] pgmap v401: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 5.2 KiB/s rd, 170 B/s wr, 7 op/s 2026-03-09T14:28:32.744 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:32 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:32.744 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:32 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:32.744 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:32 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:32.744 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:32 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:33.005 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:28:33.005 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:33.005 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:33.005 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:33.005 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:33.005 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:33.005 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:33.005 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:32 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:33.005 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:33.005 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:33.006 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:33.006 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:33.006 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:33.006 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:33.006 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:33.006 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:28:33.006 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:33.006 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:32 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:33.051 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:32 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:33.051 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:32 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:33.051 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:32 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:33.051 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:32 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:33.051 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:32 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:33.051 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:32 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:33.051 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:32 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:33.051 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:32 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:33.051 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:32 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:33.051 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:32 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:33.937 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:33.948 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:33.970 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-09T14:28:33.983 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-09T14:28:34.052 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:34.086 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T14:28:34.102 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-09T14:28:34.102 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-09T14:28:34.157 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-09T14:28:34.157 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
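The W: --force-yes stderr lines show that the removal command issued to apt on all three VMs still passes the deprecated --force-yes switch, which apt has superseded with finer-grained --allow-* options. As a hedged sketch of the modern spelling (the flag set and package list here are illustrative assumptions, not the command the framework actually runs):

    # hypothetical modern equivalent of 'apt-get -y --force-yes remove ...'
    sudo apt-get -y --allow-downgrades --allow-remove-essential \
        --allow-change-held-packages remove ceph-base ceph-common

Together those --allow options cover roughly what --force-yes used to grant, without disabling every safety check at once.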
2026-03-09T14:28:34.212 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:34.213 INFO:teuthology.orchestra.run.vm04.stdout:  ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:34.213 INFO:teuthology.orchestra.run.vm04.stdout:  libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 libsqlite3-mod-ceph
2026-03-09T14:28:34.213 INFO:teuthology.orchestra.run.vm04.stdout:  nvme-cli python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T14:28:34.213 INFO:teuthology.orchestra.run.vm04.stdout:  python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:34.213 INFO:teuthology.orchestra.run.vm04.stdout:  python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:34.213 INFO:teuthology.orchestra.run.vm04.stdout:  python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:34.213 INFO:teuthology.orchestra.run.vm04.stdout:  python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:34.213 INFO:teuthology.orchestra.run.vm04.stdout:  python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:34.213 INFO:teuthology.orchestra.run.vm04.stdout:  python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:34.213 INFO:teuthology.orchestra.run.vm04.stdout:  python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:34.213 INFO:teuthology.orchestra.run.vm04.stdout:  python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:34.213 INFO:teuthology.orchestra.run.vm04.stdout:  python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:34.213 INFO:teuthology.orchestra.run.vm04.stdout:  python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:34.213 INFO:teuthology.orchestra.run.vm04.stdout:  python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:34.213 INFO:teuthology.orchestra.run.vm04.stdout:  smartmontools socat xmlstarlet
2026-03-09T14:28:34.213 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:34.224 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED:
2026-03-09T14:28:34.225 INFO:teuthology.orchestra.run.vm04.stdout:  ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw*
2026-03-09T14:28:34.247 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T14:28:34.247 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T14:28:34.277 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:34.277 INFO:teuthology.orchestra.run.vm05.stdout:  ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:34.277 INFO:teuthology.orchestra.run.vm05.stdout:  libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 libsqlite3-mod-ceph
2026-03-09T14:28:34.277 INFO:teuthology.orchestra.run.vm05.stdout:  nvme-cli python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T14:28:34.278 INFO:teuthology.orchestra.run.vm05.stdout:  python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:34.278 INFO:teuthology.orchestra.run.vm05.stdout:  python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:34.278 INFO:teuthology.orchestra.run.vm05.stdout:  python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:34.278 INFO:teuthology.orchestra.run.vm05.stdout:  python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:34.278 INFO:teuthology.orchestra.run.vm05.stdout:  python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:34.278 INFO:teuthology.orchestra.run.vm05.stdout:  python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:34.278 INFO:teuthology.orchestra.run.vm05.stdout:  python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:34.278 INFO:teuthology.orchestra.run.vm05.stdout:  python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:34.278 INFO:teuthology.orchestra.run.vm05.stdout:  python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:34.278 INFO:teuthology.orchestra.run.vm05.stdout:  python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:34.278 INFO:teuthology.orchestra.run.vm05.stdout:  python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:34.278 INFO:teuthology.orchestra.run.vm05.stdout:  smartmontools socat xmlstarlet
2026-03-09T14:28:34.278 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:34.285 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be REMOVED:
2026-03-09T14:28:34.285 INFO:teuthology.orchestra.run.vm05.stdout:  ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw*
2026-03-09T14:28:34.383 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:34.384 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:34.384 INFO:teuthology.orchestra.run.vm03.stdout:  libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 libsqlite3-mod-ceph
2026-03-09T14:28:34.384 INFO:teuthology.orchestra.run.vm03.stdout:  nvme-cli python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T14:28:34.384 INFO:teuthology.orchestra.run.vm03.stdout:  python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:34.384 INFO:teuthology.orchestra.run.vm03.stdout:  python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:34.384 INFO:teuthology.orchestra.run.vm03.stdout:  python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:34.384 INFO:teuthology.orchestra.run.vm03.stdout:  python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:34.384 INFO:teuthology.orchestra.run.vm03.stdout:  python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:34.384 INFO:teuthology.orchestra.run.vm03.stdout:  python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:34.384 INFO:teuthology.orchestra.run.vm03.stdout:  python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:34.384 INFO:teuthology.orchestra.run.vm03.stdout:  python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:34.384 INFO:teuthology.orchestra.run.vm03.stdout:  python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:34.384 INFO:teuthology.orchestra.run.vm03.stdout:  python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:34.384 INFO:teuthology.orchestra.run.vm03.stdout:  python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:34.384 INFO:teuthology.orchestra.run.vm03.stdout:  smartmontools socat xmlstarlet
2026-03-09T14:28:34.384 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:34.394 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-09T14:28:34.394 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw*
2026-03-09T14:28:34.427 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded.
2026-03-09T14:28:34.427 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 472 MB disk space will be freed.
2026-03-09T14:28:34.448 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded.
2026-03-09T14:28:34.448 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 472 MB disk space will be freed.
2026-03-09T14:28:34.460 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 118030 files and directories currently installed.)
2026-03-09T14:28:34.461 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:34.487 INFO:teuthology.orchestra.run.vm05.stdout:(Reading database ... 118030 files and directories currently installed.)
2026-03-09T14:28:34.489 INFO:teuthology.orchestra.run.vm05.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:34.496 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:34 vm03 bash[17524]: cluster 2026-03-09T14:28:32.997938+0000 mgr.x (mgr.14150) 508 : cluster [DBG] pgmap v402: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 3 op/s
2026-03-09T14:28:34.496 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:34 vm03 bash[17524]: cluster 2026-03-09T14:28:32.997938+0000 mgr.x (mgr.14150) 508 : cluster [DBG] pgmap v402: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 3 op/s
2026-03-09T14:28:34.525 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:34.552 INFO:teuthology.orchestra.run.vm05.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:34.563 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded.
2026-03-09T14:28:34.563 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 472 MB disk space will be freed.
2026-03-09T14:28:34.596 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 118030 files and directories currently installed.)
2026-03-09T14:28:34.598 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:34.654 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:34.755 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:34 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:34.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:34 vm04 bash[19581]: cluster 2026-03-09T14:28:32.997938+0000 mgr.x (mgr.14150) 508 : cluster [DBG] pgmap v402: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 3 op/s
2026-03-09T14:28:34.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:34 vm04 bash[19581]: cluster 2026-03-09T14:28:32.997938+0000 mgr.x (mgr.14150) 508 : cluster [DBG] pgmap v402: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 3 op/s
2026-03-09T14:28:34.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:34 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:34.755 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:34 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:34.755 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:34 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:34.755 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:34 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:34.755 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:34 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
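The interleaved bash[...] cluster records are the mon units echoing the mgr's cluster log: pgmap v402 reporting 4 pgs active+clean confirms the cluster stays healthy while the packages are stripped. The same one-line summary can be pulled on demand with the standard ceph CLI (shown for illustration; these commands are not part of this run):

    sudo ceph pg stat    # one-line pgmap summary, e.g. '4 pgs: 4 active+clean; ...'
    sudo ceph status     # fuller view: health, mon/mgr/osd maps, pgmap, client io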
2026-03-09T14:28:34.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:34 vm05 bash[20070]: cluster 2026-03-09T14:28:32.997938+0000 mgr.x (mgr.14150) 508 : cluster [DBG] pgmap v402: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 3 op/s 2026-03-09T14:28:34.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:34 vm05 bash[20070]: cluster 2026-03-09T14:28:32.997938+0000 mgr.x (mgr.14150) 508 : cluster [DBG] pgmap v402: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 3 op/s 2026-03-09T14:28:34.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:34 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:34.755 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:34 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:34.756 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:34 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:34.997 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:28:35.029 INFO:teuthology.orchestra.run.vm05.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:28:35.055 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:34 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:35.056 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:34 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:35.056 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:34 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:28:35.056 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:34 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:35.056 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:34 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:35.087 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:34 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:35.087 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:34 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:35.087 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:34 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:35.087 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:34 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:35.087 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:35 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:35.087 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:34 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:35.157 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:34 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:35.157 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:34 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:35.157 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:34 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:35.157 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:34 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:35.162 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:28:35.394 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:35.394 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:35.394 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:28:35.394 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:35.394 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:35.405 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:35 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:35.405 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:35 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:35.405 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:35 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:35.406 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:35 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:35.406 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:35 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:35.406 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:35 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:35.406 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:35 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:35.406 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:35 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:35.406 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:35 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:35.435 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:28:35.451 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:35 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:35.452 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:35 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:35.452 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:35 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:35.452 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:35 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:28:35.452 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:35 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:35.452 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:35 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:35.452 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:35 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:35.452 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:35 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:35.479 INFO:teuthology.orchestra.run.vm05.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:35.607 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:35.696 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:35.696 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:35.696 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:35.696 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:35.696 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:35.696 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:35.697 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:35.697 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:35.697 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:35.697 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:35.738 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:35 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:35.739 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:35 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:35.739 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:35 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:35.739 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:35 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:35.739 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:35 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:35.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:35 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:35.755 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:35 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:35.755 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:35 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:35.755 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:35 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:35.839 INFO:teuthology.orchestra.run.vm04.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:35.907 INFO:teuthology.orchestra.run.vm05.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:36.005 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.005 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.005 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.005 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.005 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:35 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.023 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:35 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.024 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:35 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.024 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:35 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.024 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:35 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.046 INFO:teuthology.orchestra.run.vm03.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:36.050 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:35 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.051 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:35 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.051 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:35 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.051 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:35 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.051 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:35 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.051 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:35 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.051 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:35 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.051 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:35 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.051 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:35 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.051 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:35 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.274 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:36 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.274 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:36 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.274 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:36 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.274 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:36 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.274 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:36 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.275 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:36 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.275 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:36 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.313 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:36 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.313 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:36 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.313 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:36 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.313 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:36 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.313 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:36 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.351 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:36.387 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:36.391 INFO:teuthology.orchestra.run.vm05.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:36.412 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:36 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.412 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:36 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.412 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:36 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.412 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:36 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.412 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:36 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.428 INFO:teuthology.orchestra.run.vm05.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:36.500 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:36.544 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:36.565 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:36 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.565 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:36 vm04 systemd[1]: Stopping Ceph mon.b for 3346de4a-1bc2-11f1-95ae-3796c8433614...
2026-03-09T14:28:36.565 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:36 vm04 bash[19581]: cluster 2026-03-09T14:28:34.998182+0000 mgr.x (mgr.14150) 509 : cluster [DBG] pgmap v403: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 3.5 KiB/s rd, 4 op/s
2026-03-09T14:28:36.565 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:36 vm04 bash[19581]: cluster 2026-03-09T14:28:34.998182+0000 mgr.x (mgr.14150) 509 : cluster [DBG] pgmap v403: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 3.5 KiB/s rd, 4 op/s
2026-03-09T14:28:36.565 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:36 vm04 bash[19581]: debug 2026-03-09T14:28:36.506+0000 7feeb078e640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.b -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-09T14:28:36.565 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:36 vm04 bash[19581]: debug 2026-03-09T14:28:36.506+0000 7feeb078e640 -1 mon.b@2(peon) e3 *** Got Signal Terminated ***
2026-03-09T14:28:36.565 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:36 vm04 systemd[1]: Stopping Ceph osd.2 for 3346de4a-1bc2-11f1-95ae-3796c8433614...
2026-03-09T14:28:36.565 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:36 vm04 bash[22192]: debug 2026-03-09T14:28:36.498+0000 7f375f28d640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-09T14:28:36.565 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:36 vm04 bash[22192]: debug 2026-03-09T14:28:36.498+0000 7f375f28d640 -1 osd.2 65 *** Got signal Terminated ***
2026-03-09T14:28:36.565 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:36 vm04 bash[22192]: debug 2026-03-09T14:28:36.498+0000 7f375f28d640 -1 osd.2 65 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-09T14:28:36.565 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:36 vm04 systemd[1]: Stopping Ceph osd.3 for 3346de4a-1bc2-11f1-95ae-3796c8433614...
2026-03-09T14:28:36.565 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:36 vm04 bash[27913]: debug 2026-03-09T14:28:36.498+0000 7f365196c640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.3 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-09T14:28:36.565 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:36 vm04 bash[27913]: debug 2026-03-09T14:28:36.498+0000 7f365196c640 -1 osd.3 65 *** Got signal Terminated ***
2026-03-09T14:28:36.565 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:36 vm04 bash[27913]: debug 2026-03-09T14:28:36.498+0000 7f365196c640 -1 osd.3 65 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-09T14:28:36.565 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:36 vm04 systemd[1]: Stopping Ceph osd.4 for 3346de4a-1bc2-11f1-95ae-3796c8433614...
2026-03-09T14:28:36.565 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:36 vm04 bash[33871]: debug 2026-03-09T14:28:36.502+0000 7f6a4dece640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.4 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-09T14:28:36.565 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:36 vm04 bash[33871]: debug 2026-03-09T14:28:36.502+0000 7f6a4dece640 -1 osd.4 65 *** Got signal Terminated ***
2026-03-09T14:28:36.565 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:36 vm04 bash[33871]: debug 2026-03-09T14:28:36.502+0000 7f6a4dece640 -1 osd.4 65 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-09T14:28:36.580 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:36 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.580 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:36 vm05 systemd[1]: Stopping Ceph mon.c for 3346de4a-1bc2-11f1-95ae-3796c8433614...
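[annotation] The *** Immediate shutdown (osd_fast_shutdown=true) *** lines above are expected: with osd_fast_shutdown enabled, an OSD exits as soon as it receives SIGTERM instead of walking through an orderly shutdown, which is exactly what cluster teardown wants. A minimal sketch of how the option could be inspected or flipped through the standard ceph config interface (not something this job does):

    # Confirm the setting that produced the "Immediate shutdown" lines.
    ceph config get osd osd_fast_shutdown        # expected: true
    # When debugging a shutdown-path problem one could opt into the slower,
    # orderly shutdown instead:
    ceph config set osd osd_fast_shutdown false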
2026-03-09T14:28:36.580 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:36 vm05 bash[20070]: cluster 2026-03-09T14:28:34.998182+0000 mgr.x (mgr.14150) 509 : cluster [DBG] pgmap v403: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 3.5 KiB/s rd, 4 op/s
2026-03-09T14:28:36.580 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:36 vm05 bash[20070]: cluster 2026-03-09T14:28:34.998182+0000 mgr.x (mgr.14150) 509 : cluster [DBG] pgmap v403: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 3.5 KiB/s rd, 4 op/s
2026-03-09T14:28:36.580 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:36 vm05 bash[20070]: debug 2026-03-09T14:28:36.575+0000 7f3243aa8640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.c -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-09T14:28:36.580 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:36 vm05 bash[20070]: debug 2026-03-09T14:28:36.575+0000 7f3243aa8640 -1 mon.c@1(peon) e3 *** Got Signal Terminated ***
2026-03-09T14:28:36.580 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:36 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.580 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:36 vm05 systemd[1]: Stopping Ceph osd.5 for 3346de4a-1bc2-11f1-95ae-3796c8433614...
2026-03-09T14:28:36.580 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:36 vm05 bash[22975]: debug 2026-03-09T14:28:36.555+0000 7fcc979c0640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.5 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-09T14:28:36.580 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:36 vm05 bash[22975]: debug 2026-03-09T14:28:36.555+0000 7fcc979c0640 -1 osd.5 65 *** Got signal Terminated ***
2026-03-09T14:28:36.580 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:36 vm05 bash[22975]: debug 2026-03-09T14:28:36.555+0000 7fcc979c0640 -1 osd.5 65 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-09T14:28:36.580 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:36 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.580 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:36 vm05 systemd[1]: Stopping Ceph osd.6 for 3346de4a-1bc2-11f1-95ae-3796c8433614...
2026-03-09T14:28:36.580 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:36 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.580 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:36 vm05 systemd[1]: Stopping Ceph osd.7 for 3346de4a-1bc2-11f1-95ae-3796c8433614...
2026-03-09T14:28:36.580 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:36 vm05 bash[34675]: debug 2026-03-09T14:28:36.543+0000 7f8f114f7640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.7 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-09T14:28:36.580 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:36 vm05 bash[34675]: debug 2026-03-09T14:28:36.543+0000 7f8f114f7640 -1 osd.7 65 *** Got signal Terminated ***
2026-03-09T14:28:36.580 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:36 vm05 bash[34675]: debug 2026-03-09T14:28:36.543+0000 7f8f114f7640 -1 osd.7 65 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-09T14:28:36.581 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:36 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.581 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:36 vm05 systemd[1]: Stopping Ceph iscsi.iscsi.b for 3346de4a-1bc2-11f1-95ae-3796c8433614...
2026-03-09T14:28:36.713 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:36 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.714 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:36 vm03 bash[17524]: cluster 2026-03-09T14:28:34.998182+0000 mgr.x (mgr.14150) 509 : cluster [DBG] pgmap v403: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 3.5 KiB/s rd, 4 op/s
2026-03-09T14:28:36.714 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:36 vm03 bash[17524]: cluster 2026-03-09T14:28:34.998182+0000 mgr.x (mgr.14150) 509 : cluster [DBG] pgmap v403: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 3.5 KiB/s rd, 4 op/s
2026-03-09T14:28:36.714 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:36 vm03 systemd[1]: Stopping Ceph mon.a for 3346de4a-1bc2-11f1-95ae-3796c8433614...
2026-03-09T14:28:36.714 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:36 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.714 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:36 vm03 systemd[1]: Stopping Ceph mgr.x for 3346de4a-1bc2-11f1-95ae-3796c8433614...
2026-03-09T14:28:36.714 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:36 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.714 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:36 vm03 systemd[1]: Stopping Ceph osd.0 for 3346de4a-1bc2-11f1-95ae-3796c8433614...
2026-03-09T14:28:36.714 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:36 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.714 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:36 vm03 systemd[1]: Stopping Ceph osd.1 for 3346de4a-1bc2-11f1-95ae-3796c8433614...
2026-03-09T14:28:36.714 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:36 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.714 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:36 vm03 systemd[1]: Stopping Ceph iscsi.iscsi.a for 3346de4a-1bc2-11f1-95ae-3796c8433614...
2026-03-09T14:28:36.818 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:36 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.818 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:36 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.818 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:36 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.818 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:36 vm04 bash[42011]: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614-mon-b
2026-03-09T14:28:36.818 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:36 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.852 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:36 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.852 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:36 vm05 bash[28959]: debug 2026-03-09T14:28:36.571+0000 7f8591fec640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.6 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-09T14:28:36.852 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:36 vm05 bash[28959]: debug 2026-03-09T14:28:36.571+0000 7f8591fec640 -1 osd.6 65 *** Got signal Terminated ***
2026-03-09T14:28:36.852 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:36 vm05 bash[28959]: debug 2026-03-09T14:28:36.571+0000 7f8591fec640 -1 osd.6 65 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-09T14:28:36.852 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:36 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.852 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:36 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.852 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:36 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.853 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:36 vm05 bash[38699]: debug Shutdown received
2026-03-09T14:28:36.853 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:36 vm05 bash[38699]: debug No gateway configuration to remove on this host (vm05.local)
2026-03-09T14:28:36.853 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:36 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.927 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T14:28:36.972 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:36 vm03 bash[17524]: debug 2026-03-09T14:28:36.730+0000 7f20d6809640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-09T14:28:36.972 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:36 vm03 bash[17524]: debug 2026-03-09T14:28:36.730+0000 7f20d6809640 -1 mon.a@0(leader) e3 *** Got Signal Terminated ***
2026-03-09T14:28:36.972 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:36 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.972 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:36 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.972 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:36 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
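[annotation] The "No gateway configuration to remove on this host (vm05.local)" line above indicates the stopping iscsi container found no local LIO state left to tear down, consistent with the gwcli_delete.t cram test having already removed the gateway. ceph-iscsi keeps its gateway configuration in a RADOS object rather than on the host; a minimal sketch of how that could be checked, assuming the usual defaults of an object named gateway.conf in the rbd pool (names not taken from this run):

    # List the config object, if any, and pretty-print it when present.
    rados -p rbd ls | grep -x gateway.conf || echo "no gateway configuration"
    rados -p rbd get gateway.conf - | python3 -m json.tool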
2026-03-09T14:28:36.972 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:36 vm03 bash[27638]: debug 2026-03-09T14:28:36.778+0000 7f2cb5103640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-09T14:28:36.972 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:36 vm03 bash[27638]: debug 2026-03-09T14:28:36.778+0000 7f2cb5103640 -1 osd.0 65 *** Got signal Terminated ***
2026-03-09T14:28:36.973 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:36 vm03 bash[27638]: debug 2026-03-09T14:28:36.778+0000 7f2cb5103640 -1 osd.0 65 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-09T14:28:36.973 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:36 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.973 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:36 vm03 bash[33650]: debug 2026-03-09T14:28:36.778+0000 7ff58c568640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-09T14:28:36.973 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:36 vm03 bash[33650]: debug 2026-03-09T14:28:36.778+0000 7ff58c568640 -1 osd.1 65 *** Got signal Terminated ***
2026-03-09T14:28:36.973 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:36 vm03 bash[33650]: debug 2026-03-09T14:28:36.778+0000 7ff58c568640 -1 osd.1 65 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-09T14:28:36.973 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:36 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:36.973 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:36 vm03 bash[37744]: debug Shutdown received
2026-03-09T14:28:36.973 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:36 vm03 bash[37744]: debug No gateway configuration to remove on this host (vm03.local)
2026-03-09T14:28:37.006 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T14:28:37.007 INFO:teuthology.orchestra.run.vm05.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T14:28:37.055 INFO:teuthology.orchestra.run.vm05.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T14:28:37.069 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T14:28:37.080 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 117550 files and directories currently installed.)
2026-03-09T14:28:37.082 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:37.095 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:36 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:37.095 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:36 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:37.095 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:36 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:37.095 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:36 vm04 systemd[1]: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@mon.b.service: Deactivated successfully.
2026-03-09T14:28:37.095 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:36 vm04 systemd[1]: Stopped Ceph mon.b for 3346de4a-1bc2-11f1-95ae-3796c8433614.
2026-03-09T14:28:37.095 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:36 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:37.121 INFO:teuthology.orchestra.run.vm05.stdout:(Reading database ... 117550 files and directories currently installed.)
2026-03-09T14:28:37.123 INFO:teuthology.orchestra.run.vm05.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:37.133 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T14:28:37.134 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:36 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.134 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:36 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.134 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:36 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.134 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:36 vm05 bash[43719]: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614-mon-c 2026-03-09T14:28:37.135 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:36 vm05 systemd[1]: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@mon.c.service: Deactivated successfully. 2026-03-09T14:28:37.135 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:36 vm05 systemd[1]: Stopped Ceph mon.c for 3346de4a-1bc2-11f1-95ae-3796c8433614. 2026-03-09T14:28:37.135 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:36 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.135 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:36 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.135 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:36 vm05 bash[43730]: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614-iscsi-iscsi-b 2026-03-09T14:28:37.135 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:37 vm05 systemd[1]: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@iscsi.iscsi.b.service: Deactivated successfully. 2026-03-09T14:28:37.135 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:37 vm05 systemd[1]: Stopped Ceph iscsi.iscsi.b for 3346de4a-1bc2-11f1-95ae-3796c8433614. 2026-03-09T14:28:37.204 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117550 files and directories currently installed.)
2026-03-09T14:28:37.206 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:28:37.302 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:36 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.302 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:36 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.302 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:37 vm03 bash[49438]: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614-iscsi-iscsi-a 2026-03-09T14:28:37.302 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:37 vm03 systemd[1]: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@iscsi.iscsi.a.service: Deactivated successfully. 2026-03-09T14:28:37.302 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:37 vm03 systemd[1]: Stopped Ceph iscsi.iscsi.a for 3346de4a-1bc2-11f1-95ae-3796c8433614. 2026-03-09T14:28:37.302 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:36 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.303 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:37 vm03 bash[49364]: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614-mgr-x 2026-03-09T14:28:37.303 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:37 vm03 systemd[1]: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@mgr.x.service: Deactivated successfully. 2026-03-09T14:28:37.303 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:37 vm03 systemd[1]: Stopped Ceph mgr.x for 3346de4a-1bc2-11f1-95ae-3796c8433614. 2026-03-09T14:28:37.303 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:36 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'.
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.303 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:37 vm03 bash[49392]: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614-mon-a 2026-03-09T14:28:37.303 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:37 vm03 systemd[1]: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@mon.a.service: Deactivated successfully. 2026-03-09T14:28:37.303 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:37 vm03 systemd[1]: Stopped Ceph mon.a for 3346de4a-1bc2-11f1-95ae-3796c8433614. 2026-03-09T14:28:37.303 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:36 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.444 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:37 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.444 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:37 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.445 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:37 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.445 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:37 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.505 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:37 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.505 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:37 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.505 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:37 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.505 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:37 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.505 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:37 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.608 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:37 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.608 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:37 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.608 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:37 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.608 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:37 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
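
Each daemon above runs under cephadm's per-cluster systemd template, ceph-<fsid>@<daemon>.service, instantiated here with the cluster fsid 3346de4a-1bc2-11f1-95ae-3796c8433614; that is why every mon, mgr, osd, and iscsi daemon produces its own "Deactivated successfully" / "Stopped Ceph ..." pair during teardown. A quick way to see which of this cluster's units are still up mid-teardown on a given host, as a sketch (the glob is matched by systemctl itself):

    # List this cluster's units on the current host.
    systemctl list-units 'ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@*'
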
2026-03-09T14:28:37.608 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:37 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.608 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:37 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.609 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:37 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.609 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:37 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.609 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:37 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.609 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:37 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.710 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:28:37.735 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:37 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:28:37.735 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:37 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.736 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:37 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.736 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:37 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.736 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:37 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.736 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:37 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.736 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:37 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.736 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:37 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:37 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.755 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:37 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.755 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:37 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.755 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:37 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.755 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:37 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.755 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:37 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.755 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:37 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.755 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:37 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.817 INFO:teuthology.orchestra.run.vm05.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T14:28:37.818 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:28:37.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:37 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.973 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:37 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.973 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:37 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.973 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:37 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:37.973 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:37 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.005 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:37 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.005 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:37 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
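
The KillMode=none warning that dominates this stretch of the log comes from line 23 of that same cephadm-generated unit template; cephadm sets KillMode=none in its generated units because the container runtime, not systemd, owns the daemon processes. systemd repeats the warning on every unit operation, but the units still stop cleanly, as the "Deactivated successfully" lines show, so the warning is noise rather than a failure in this run. If it ever needs to be silenced on a lab host, a systemd drop-in is one option; this is a hypothetical sketch, and whether a different KillMode is actually safe for cephadm-managed containers is an assumption to verify, not something this log establishes:

    # Hypothetical drop-in overriding the template's KillMode.
    sudo mkdir -p /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service.d
    printf '[Service]\nKillMode=mixed\n' | sudo tee /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service.d/killmode.conf
    sudo systemctl daemon-reload
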
2026-03-09T14:28:38.005 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:37 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.005 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:37 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.044 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:37 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.044 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:37 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.044 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:37 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.044 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:37 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.044 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:37 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.044 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:37 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.044 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:37 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.170 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:28:38.227 INFO:teuthology.orchestra.run.vm05.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:28:38.265 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:28:38.295 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:37 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.295 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:38 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.295 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:37 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.295 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:38 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.295 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:37 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.295 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:38 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.295 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:37 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.295 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:38 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.295 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:37 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.295 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:38 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.375 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:38 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.375 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:38 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.375 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:38 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:28:38.375 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:38 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.375 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:38 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.401 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:38 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.401 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:38 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.401 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:38 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.401 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:38 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.401 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:38 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.401 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:38 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.401 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:38 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.401 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:38 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.550 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:38 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.551 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:38 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.551 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:38 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.551 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:38 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.551 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:38 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.600 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T14:28:38.664 INFO:teuthology.orchestra.run.vm05.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:28:38.675 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:38 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.675 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:38 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.675 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:38 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.675 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:38 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.675 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:38 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.675 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:38 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.675 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:38 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.675 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:38 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.675 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:38 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.675 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:38 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.738 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:28:38.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:38 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.755 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:38 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.755 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:38 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.755 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:38 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.917 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:38 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.917 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:38 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.917 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:38 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.917 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:38 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:38.917 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:38 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.005 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:38 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.005 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:38 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.005 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:38 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.005 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:38 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.005 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:38 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.019 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:38 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.019 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:38 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.019 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:38 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.019 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:38 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.019 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:39 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.085 INFO:teuthology.orchestra.run.vm04.stdout:dpkg: warning: while removing ceph-common, directory '/var/lib/ceph' not empty so not removed 2026-03-09T14:28:39.093 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T14:28:39.095 INFO:teuthology.orchestra.run.vm05.stdout:dpkg: warning: while removing ceph-common, directory '/var/lib/ceph' not empty so not removed 2026-03-09T14:28:39.103 INFO:teuthology.orchestra.run.vm05.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:28:39.167 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:38 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.167 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:39 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.168 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:38 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.168 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:39 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.168 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:38 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.168 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:39 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.168 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:38 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:28:39.168 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:39 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.168 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:38 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.168 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:39 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.233 INFO:teuthology.orchestra.run.vm03.stdout:dpkg: warning: while removing ceph-common, directory '/var/lib/ceph' not empty so not removed 2026-03-09T14:28:39.241 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:28:39.335 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:39 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.335 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:39 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.335 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:39 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.335 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:39 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
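
The interleaved dpkg warnings ("while removing ceph-common, directory '/var/lib/ceph' not empty so not removed") are expected at this point in the teardown: purging the Debian packages does not touch the cephadm-managed cluster state, which lives under /var/lib/ceph keyed by fsid. Cleaning that up is a separate step; a sketch using this run's fsid (destructive, hence the explicit --force):

    # Sketch only: remove cephadm-managed cluster state left behind by the
    # package purge. This destroys all cluster data for that fsid on the host.
    sudo cephadm rm-cluster --force --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614
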
2026-03-09T14:28:39.335 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:39 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.335 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:39 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.335 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:39 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.349 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:39 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.349 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:39 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.349 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:39 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.349 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:39 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.349 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:39 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.349 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:39 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.349 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:39 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.349 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:39 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.349 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:39 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.349 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:39 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.471 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:39 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.471 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:39 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:28:39.471 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:39 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.471 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:39 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.471 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:39 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.755 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:39 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.755 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:39 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.755 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:39 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.755 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:39 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.755 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:39 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.755 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:39 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.755 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:39 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.755 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:39 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.755 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:39 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.801 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:39 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.801 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:39 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.801 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:39 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:28:39.801 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:39 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:39.801 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:39 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:40.514 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T14:28:40.527 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T14:28:40.546 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-09T14:28:40.562 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists... 2026-03-09T14:28:40.640 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T14:28:40.673 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T14:28:40.719 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-09T14:28:40.720 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-09T14:28:40.734 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree... 2026-03-09T14:28:40.735 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information... 
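
The apt stderr warning repeated on all three nodes ("W: --force-yes is deprecated") means the teardown's package commands still pass the blanket --force-yes flag; modern apt keeps honoring it but asks for the finer-grained --allow options instead. A sketch of the equivalent invocation, with ceph-common standing in for whichever package set the teardown actually removes:

    # Sketch only: the granular replacements for the deprecated --force-yes.
    sudo apt-get purge -y \
        --allow-downgrades --allow-remove-essential --allow-change-held-packages \
        ceph-common
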
2026-03-09T14:28:40.847 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T14:28:40.847 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-09T14:28:40.847 INFO:teuthology.orchestra.run.vm04.stdout: libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 libsqlite3-mod-ceph 2026-03-09T14:28:40.847 INFO:teuthology.orchestra.run.vm04.stdout: nvme-cli python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T14:28:40.847 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-09T14:28:40.847 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-09T14:28:40.847 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-09T14:28:40.847 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-09T14:28:40.848 INFO:teuthology.orchestra.run.vm04.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-09T14:28:40.848 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-09T14:28:40.848 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-09T14:28:40.848 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-09T14:28:40.848 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-09T14:28:40.848 INFO:teuthology.orchestra.run.vm04.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-09T14:28:40.848 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-09T14:28:40.848 INFO:teuthology.orchestra.run.vm04.stdout: smartmontools socat xmlstarlet 2026-03-09T14:28:40.848 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T14:28:40.859 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED: 2026-03-09T14:28:40.860 INFO:teuthology.orchestra.run.vm04.stdout: ceph-fuse* 2026-03-09T14:28:40.861 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-09T14:28:40.861 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 
2026-03-09T14:28:40.868 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T14:28:40.869 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-09T14:28:40.869 INFO:teuthology.orchestra.run.vm05.stdout: libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 libsqlite3-mod-ceph 2026-03-09T14:28:40.869 INFO:teuthology.orchestra.run.vm05.stdout: nvme-cli python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T14:28:40.869 INFO:teuthology.orchestra.run.vm05.stdout: python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-09T14:28:40.869 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-09T14:28:40.869 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-09T14:28:40.869 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-09T14:28:40.869 INFO:teuthology.orchestra.run.vm05.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-09T14:28:40.869 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-09T14:28:40.869 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-09T14:28:40.869 INFO:teuthology.orchestra.run.vm05.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-09T14:28:40.870 INFO:teuthology.orchestra.run.vm05.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-09T14:28:40.870 INFO:teuthology.orchestra.run.vm05.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-09T14:28:40.870 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-09T14:28:40.870 INFO:teuthology.orchestra.run.vm05.stdout: smartmontools socat xmlstarlet 2026-03-09T14:28:40.870 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them. 
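
The "no longer required" lists printed on each node are apt bookkeeping: those packages arrived as dependencies of the ceph packages now being purged and will linger until autoremoved. If the nodes were meant to end up fully clean, the follow-up would be a one-liner:

    # Sketch only: drop orphaned dependencies and their config files in one pass.
    sudo apt-get autoremove --purge -y
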
2026-03-09T14:28:40.882 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be REMOVED: 2026-03-09T14:28:40.883 INFO:teuthology.orchestra.run.vm05.stdout: ceph-fuse* 2026-03-09T14:28:40.983 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T14:28:40.983 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-09T14:28:40.983 INFO:teuthology.orchestra.run.vm03.stdout: libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 libsqlite3-mod-ceph 2026-03-09T14:28:40.983 INFO:teuthology.orchestra.run.vm03.stdout: nvme-cli python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T14:28:40.983 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-09T14:28:40.983 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-09T14:28:40.983 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-09T14:28:40.983 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-09T14:28:40.983 INFO:teuthology.orchestra.run.vm03.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-09T14:28:40.983 INFO:teuthology.orchestra.run.vm03.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-09T14:28:40.983 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-09T14:28:40.983 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-09T14:28:40.984 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-09T14:28:40.984 INFO:teuthology.orchestra.run.vm03.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-09T14:28:40.984 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-09T14:28:40.984 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat xmlstarlet 2026-03-09T14:28:40.984 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T14:28:40.992 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-09T14:28:40.993 INFO:teuthology.orchestra.run.vm03.stdout: ceph-fuse* 2026-03-09T14:28:41.025 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T14:28:41.025 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 3673 kB disk space will be freed. 2026-03-09T14:28:41.045 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T14:28:41.046 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 3673 kB disk space will be freed. 2026-03-09T14:28:41.060 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 117536 files and directories currently installed.) 2026-03-09T14:28:41.062 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:28:41.082 INFO:teuthology.orchestra.run.vm05.stdout:(Reading database ... 117536 files and directories currently installed.) 2026-03-09T14:28:41.084 INFO:teuthology.orchestra.run.vm05.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:28:41.147 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T14:28:41.147 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 3673 kB disk space will be freed. 2026-03-09T14:28:41.181 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117536 files and directories currently installed.) 2026-03-09T14:28:41.183 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:28:41.436 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:41 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:41.436 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:41 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:41.436 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:41 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'.
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:41.436 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:41 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:41.436 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:41 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:41.466 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:41 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:41.466 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:41 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:41.466 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:41 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:41.466 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:41 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:41.504 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:41 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:41.504 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:41 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:41.504 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:41 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:41.504 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:41 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:41.504 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:41 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:41.521 INFO:teuthology.orchestra.run.vm05.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T14:28:41.566 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T14:28:41.602 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T14:28:41.668 INFO:teuthology.orchestra.run.vm05.stdout:(Reading database ... 117527 files and directories currently installed.) 2026-03-09T14:28:41.670 INFO:teuthology.orchestra.run.vm05.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:28:41.712 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:41 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:41.713 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:41 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service.
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:41.713 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:41 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:41.713 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:41 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:41.713 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:41 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:41.718 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:41 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:41.720 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:41 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:41.721 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:41 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:41.721 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:41 vm04 bash[42017]: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614-osd-4 2026-03-09T14:28:41.721 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:41 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:28:41.801 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:41 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:41.801 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:41 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:41.801 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:41 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:41.801 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:41 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:41.801 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:41 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:41.912 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 117527 files and directories currently installed.) 2026-03-09T14:28:41.916 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:28:41.923 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117527 files and directories currently installed.) 2026-03-09T14:28:41.926 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:28:41.982 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:41 vm05 bash[43695]: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614-osd-5 2026-03-09T14:28:41.982 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:41 vm05 bash[43714]: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614-osd-6 2026-03-09T14:28:41.982 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:41 vm05 bash[43663]: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614-osd-7 2026-03-09T14:28:42.006 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:41 vm04 bash[42004]: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614-osd-2 2026-03-09T14:28:42.006 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:41 vm04 bash[42028]: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614-osd-3 2026-03-09T14:28:42.006 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:41 vm04 bash[42854]: Error response from daemon: No such container: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614-osd-3 2026-03-09T14:28:42.170 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:41 vm03 bash[49450]: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614-osd-0 2026-03-09T14:28:42.170 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:41 vm03 bash[49372]: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614-osd-1 2026-03-09T14:28:42.236 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:42 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:42.237 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:42 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:42.237 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:42 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:42.237 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:42 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
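
The one error in this stretch, "Error response from daemon: No such container: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614-osd-3" on vm04, looks like the unit's stop script racing container removal: the container was already gone by the time a later command asked the engine to remove it, which is harmless during teardown. A sketch of the check one could run on the node (the "Error response from daemon" phrasing suggests the docker backend):

    # Sketch only: confirm no containers for this cluster survived the stops.
    sudo docker ps -a --filter name=ceph-3346de4a-1bc2-11f1-95ae-3796c8433614
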
2026-03-09T14:28:42.237 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:42 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:42.379 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:42 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:42.380 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:42 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:42.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:42 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:42.380 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:42 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:42.428 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:42 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:42.428 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:42 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:42.428 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:42 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:42.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:42 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:42.428 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:42 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:42.505 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:42 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:42.505 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:42 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:42.505 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:42 vm05 systemd[1]: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.5.service: Deactivated successfully. 2026-03-09T14:28:42.505 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:42 vm05 systemd[1]: Stopped Ceph osd.5 for 3346de4a-1bc2-11f1-95ae-3796c8433614. 2026-03-09T14:28:42.505 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:42 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:42.505 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:42 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:42.505 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:42 vm05 systemd[1]: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.7.service: Deactivated successfully. 
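
From here the journal shows the orderly wind-down: each ceph-<fsid>@osd.N.service logs "Deactivated successfully" followed by "Stopped Ceph osd.N". A sketch of how one would verify the same state directly on a node, again assuming this run's fsid:

    # Sketch only: list all units of this cluster, including inactive ones,
    # to confirm the teardown left nothing running.
    systemctl list-units --all 'ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@*'
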
2026-03-09T14:28:42.505 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:42 vm05 systemd[1]: Stopped Ceph osd.7 for 3346de4a-1bc2-11f1-95ae-3796c8433614.
2026-03-09T14:28:42.506 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:42 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:42.654 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:42 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:42.654 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:42 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:42.654 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:42 vm04 systemd[1]: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.2.service: Deactivated successfully.
2026-03-09T14:28:42.654 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:42 vm04 systemd[1]: Stopped Ceph osd.2 for 3346de4a-1bc2-11f1-95ae-3796c8433614.
2026-03-09T14:28:42.654 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:42 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:42.654 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:42 vm04 systemd[1]: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.3.service: Deactivated successfully.
2026-03-09T14:28:42.655 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:42 vm04 systemd[1]: Stopped Ceph osd.3 for 3346de4a-1bc2-11f1-95ae-3796c8433614.
2026-03-09T14:28:42.655 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:42 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:42.801 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:42 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:42.801 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:42 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:42.801 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:42 vm03 systemd[1]: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.0.service: Deactivated successfully.
2026-03-09T14:28:42.801 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:42 vm03 systemd[1]: Stopped Ceph osd.0 for 3346de4a-1bc2-11f1-95ae-3796c8433614.
2026-03-09T14:28:42.801 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:42 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:42.801 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:42 vm03 systemd[1]: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.1.service: Deactivated successfully.
2026-03-09T14:28:42.801 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:42 vm03 systemd[1]: Stopped Ceph osd.1 for 3346de4a-1bc2-11f1-95ae-3796c8433614.
2026-03-09T14:28:42.801 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:42 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:42.801 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:42 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:43.005 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:42 vm04 systemd[1]: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.4.service: Deactivated successfully.
2026-03-09T14:28:43.005 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:42 vm04 systemd[1]: Stopped Ceph osd.4 for 3346de4a-1bc2-11f1-95ae-3796c8433614.
2026-03-09T14:28:43.005 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:42 vm05 systemd[1]: ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.6.service: Deactivated successfully.
2026-03-09T14:28:43.005 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:42 vm05 systemd[1]: Stopped Ceph osd.6 for 3346de4a-1bc2-11f1-95ae-3796c8433614.
2026-03-09T14:28:43.409 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:43.440 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
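Note on the repeated systemd warnings above: they all point at the same template unit, /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service, which cephadm instantiates once per daemon (e.g. ...@osd.5.service) and which sets KillMode=none at line 23 of the unit file. The warning is cosmetic for this run; systemd's suggested remediation could be sketched as a drop-in override, which survives regeneration of the main unit file. This is illustrative only, not something the test does; the fsid is taken from the log above:

    # Hypothetical manual override via a systemd drop-in. 'mixed' sends
    # SIGTERM to the main process and SIGKILL to any remaining processes
    # in the unit's control group when the service is stopped.
    sudo mkdir -p /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service.d
    printf '[Service]\nKillMode=mixed\n' | sudo tee /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service.d/killmode.conf
    sudo systemctl daemon-reload

Because the drop-in directory is named after the template, the override applies to every instance (osd.0, mon.a, the iscsi gateways, and so on) without editing the generated unit itself.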
2026-03-09T14:28:43.504 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:43.511 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:43.536 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-09T14:28:43.543 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T14:28:43.614 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-09T14:28:43.614 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-09T14:28:43.712 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-09T14:28:43.712 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-09T14:28:43.720 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T14:28:43.720 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T14:28:43.726 INFO:teuthology.orchestra.run.vm05.stdout:Package 'ceph-test' is not installed, so not removed
2026-03-09T14:28:43.726 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:43.726 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:43.726 INFO:teuthology.orchestra.run.vm05.stdout: libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 libsqlite3-mod-ceph
2026-03-09T14:28:43.727 INFO:teuthology.orchestra.run.vm05.stdout: nvme-cli python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T14:28:43.727 INFO:teuthology.orchestra.run.vm05.stdout: python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:43.727 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:43.727 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:43.727 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:43.727 INFO:teuthology.orchestra.run.vm05.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:43.727 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:43.727 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:43.727 INFO:teuthology.orchestra.run.vm05.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:43.727 INFO:teuthology.orchestra.run.vm05.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:43.727 INFO:teuthology.orchestra.run.vm05.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:43.727 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:43.727 INFO:teuthology.orchestra.run.vm05.stdout: smartmontools socat xmlstarlet
2026-03-09T14:28:43.727 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:43.748 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
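Note on the recurring apt warning above: --force-yes was apt-get's old blanket override switch, and modern apt replaces it with narrower, purpose-specific --allow-* options. The warning is harmless here, but the modern spelling would look roughly like this (the package name is taken from the removal step in this log; which --allow-* flags are actually needed depends on the operation):

    # Deprecated: one flag that relaxed several safety checks at once
    sudo apt-get -y --force-yes remove ceph-test
    # Current: opt into only the specific overrides required
    sudo apt-get -y --allow-downgrades --allow-remove-essential --allow-change-held-packages remove ceph-test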
2026-03-09T14:28:43.748 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:43.779 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-09T14:28:43.861 INFO:teuthology.orchestra.run.vm04.stdout:Package 'ceph-test' is not installed, so not removed
2026-03-09T14:28:43.861 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:43.861 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:43.861 INFO:teuthology.orchestra.run.vm04.stdout: libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 libsqlite3-mod-ceph
2026-03-09T14:28:43.861 INFO:teuthology.orchestra.run.vm04.stdout: nvme-cli python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T14:28:43.861 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:43.862 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:43.862 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:43.862 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:43.862 INFO:teuthology.orchestra.run.vm04.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:43.862 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:43.862 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:43.862 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:43.862 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:43.862 INFO:teuthology.orchestra.run.vm04.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:43.862 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:43.862 INFO:teuthology.orchestra.run.vm04.stdout: smartmontools socat xmlstarlet
2026-03-09T14:28:43.862 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:43.871 INFO:teuthology.orchestra.run.vm03.stdout:Package 'ceph-test' is not installed, so not removed
2026-03-09T14:28:43.871 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:43.871 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:43.871 INFO:teuthology.orchestra.run.vm03.stdout: libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 libsqlite3-mod-ceph
2026-03-09T14:28:43.871 INFO:teuthology.orchestra.run.vm03.stdout: nvme-cli python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T14:28:43.871 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:43.871 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:43.871 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:43.871 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:43.871 INFO:teuthology.orchestra.run.vm03.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:43.871 INFO:teuthology.orchestra.run.vm03.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:43.871 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:43.871 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:43.871 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:43.872 INFO:teuthology.orchestra.run.vm03.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:43.872 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:43.872 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat xmlstarlet
2026-03-09T14:28:43.872 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:43.883 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T14:28:43.883 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:43.891 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T14:28:43.891 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:43.914 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-09T14:28:43.922 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T14:28:43.971 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-09T14:28:43.971 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-09T14:28:44.082 INFO:teuthology.orchestra.run.vm05.stdout:Package 'ceph-volume' is not installed, so not removed
2026-03-09T14:28:44.083 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:44.083 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:44.083 INFO:teuthology.orchestra.run.vm05.stdout: libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 libsqlite3-mod-ceph
2026-03-09T14:28:44.083 INFO:teuthology.orchestra.run.vm05.stdout: nvme-cli python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T14:28:44.083 INFO:teuthology.orchestra.run.vm05.stdout: python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:44.083 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:44.083 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:44.083 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:44.083 INFO:teuthology.orchestra.run.vm05.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:44.083 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:44.083 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:44.083 INFO:teuthology.orchestra.run.vm05.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:44.083 INFO:teuthology.orchestra.run.vm05.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:44.083 INFO:teuthology.orchestra.run.vm05.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:44.083 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:44.083 INFO:teuthology.orchestra.run.vm05.stdout: smartmontools socat xmlstarlet
2026-03-09T14:28:44.083 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:44.092 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-09T14:28:44.093 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-09T14:28:44.099 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T14:28:44.099 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:44.101 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T14:28:44.102 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T14:28:44.130 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-09T14:28:44.225 INFO:teuthology.orchestra.run.vm04.stdout:Package 'ceph-volume' is not installed, so not removed
2026-03-09T14:28:44.225 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:44.225 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:44.226 INFO:teuthology.orchestra.run.vm04.stdout: libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 libsqlite3-mod-ceph
2026-03-09T14:28:44.226 INFO:teuthology.orchestra.run.vm04.stdout: nvme-cli python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T14:28:44.226 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:44.226 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:44.226 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:44.226 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:44.226 INFO:teuthology.orchestra.run.vm04.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:44.226 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:44.226 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:44.226 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:44.226 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:44.226 INFO:teuthology.orchestra.run.vm04.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:44.226 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:44.226 INFO:teuthology.orchestra.run.vm04.stdout: smartmontools socat xmlstarlet
2026-03-09T14:28:44.226 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:44.247 INFO:teuthology.orchestra.run.vm03.stdout:Package 'ceph-volume' is not installed, so not removed
2026-03-09T14:28:44.247 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:44.247 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:44.247 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T14:28:44.247 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:44.247 INFO:teuthology.orchestra.run.vm03.stdout: libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 libsqlite3-mod-ceph
2026-03-09T14:28:44.247 INFO:teuthology.orchestra.run.vm03.stdout: nvme-cli python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T14:28:44.247 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:44.247 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:44.247 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:44.247 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:44.247 INFO:teuthology.orchestra.run.vm03.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:44.247 INFO:teuthology.orchestra.run.vm03.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:44.247 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:44.248 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:44.248 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:44.248 INFO:teuthology.orchestra.run.vm03.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:44.248 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:44.248 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat xmlstarlet
2026-03-09T14:28:44.248 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:44.265 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T14:28:44.265 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:44.278 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-09T14:28:44.296 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T14:28:44.324 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-09T14:28:44.325 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-09T14:28:44.438 INFO:teuthology.orchestra.run.vm05.stdout:Package 'radosgw' is not installed, so not removed
2026-03-09T14:28:44.438 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:44.438 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:44.438 INFO:teuthology.orchestra.run.vm05.stdout: libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 libsqlite3-mod-ceph
2026-03-09T14:28:44.438 INFO:teuthology.orchestra.run.vm05.stdout: nvme-cli python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T14:28:44.438 INFO:teuthology.orchestra.run.vm05.stdout: python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:44.438 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:44.438 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:44.438 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:44.438 INFO:teuthology.orchestra.run.vm05.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:44.438 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:44.438 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:44.438 INFO:teuthology.orchestra.run.vm05.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:44.438 INFO:teuthology.orchestra.run.vm05.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:44.439 INFO:teuthology.orchestra.run.vm05.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:44.439 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:44.439 INFO:teuthology.orchestra.run.vm05.stdout: smartmontools socat xmlstarlet
2026-03-09T14:28:44.439 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:44.455 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T14:28:44.455 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:44.459 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-09T14:28:44.459 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-09T14:28:44.478 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T14:28:44.478 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T14:28:44.486 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-09T14:28:44.612 INFO:teuthology.orchestra.run.vm04.stdout:Package 'radosgw' is not installed, so not removed
2026-03-09T14:28:44.612 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:44.612 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:44.612 INFO:teuthology.orchestra.run.vm04.stdout: libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 libsqlite3-mod-ceph
2026-03-09T14:28:44.612 INFO:teuthology.orchestra.run.vm04.stdout: nvme-cli python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T14:28:44.612 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:44.612 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:44.613 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:44.613 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:44.613 INFO:teuthology.orchestra.run.vm04.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:44.613 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:44.613 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:44.613 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:44.613 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:44.613 INFO:teuthology.orchestra.run.vm04.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:44.613 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:44.613 INFO:teuthology.orchestra.run.vm04.stdout: smartmontools socat xmlstarlet
2026-03-09T14:28:44.613 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:44.634 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T14:28:44.634 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:44.649 INFO:teuthology.orchestra.run.vm03.stdout:Package 'radosgw' is not installed, so not removed
2026-03-09T14:28:44.649 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:44.649 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:44.650 INFO:teuthology.orchestra.run.vm03.stdout: libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 libsqlite3-mod-ceph
2026-03-09T14:28:44.650 INFO:teuthology.orchestra.run.vm03.stdout: nvme-cli python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T14:28:44.650 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:44.650 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:44.650 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:44.650 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:44.650 INFO:teuthology.orchestra.run.vm03.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:44.650 INFO:teuthology.orchestra.run.vm03.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:44.650 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:44.650 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:44.650 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:44.650 INFO:teuthology.orchestra.run.vm03.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:44.651 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:44.651 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat xmlstarlet
2026-03-09T14:28:44.651 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:44.666 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-09T14:28:44.677 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T14:28:44.677 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:44.695 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-09T14:28:44.695 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-09T14:28:44.710 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T14:28:44.839 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:44.839 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:44.839 INFO:teuthology.orchestra.run.vm05.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T14:28:44.839 INFO:teuthology.orchestra.run.vm05.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec
2026-03-09T14:28:44.839 INFO:teuthology.orchestra.run.vm05.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-09T14:28:44.839 INFO:teuthology.orchestra.run.vm05.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T14:28:44.839 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:44.839 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:44.839 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:44.839 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:44.839 INFO:teuthology.orchestra.run.vm05.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:44.839 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:44.839 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:44.839 INFO:teuthology.orchestra.run.vm05.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:44.839 INFO:teuthology.orchestra.run.vm05.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:44.839 INFO:teuthology.orchestra.run.vm05.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:44.839 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:44.839 INFO:teuthology.orchestra.run.vm05.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-09T14:28:44.839 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:44.850 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be REMOVED:
2026-03-09T14:28:44.850 INFO:teuthology.orchestra.run.vm05.stdout: python3-cephfs* python3-rados* python3-rgw*
2026-03-09T14:28:44.863 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-09T14:28:44.864 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-09T14:28:44.908 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T14:28:44.908 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T14:28:45.025 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 3 to remove and 10 not upgraded.
2026-03-09T14:28:45.025 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 2062 kB disk space will be freed.
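Note on the removal list above: the trailing asterisk in 'python3-cephfs* python3-rados* python3-rgw*' is apt's marker that those packages are being purged rather than merely removed, i.e. their configuration files are deleted along with the package contents. Roughly, the distinction is:

    sudo apt-get remove python3-rados   # uninstall, keep conffiles
    sudo apt-get purge python3-rados    # uninstall and delete conffiles; apt lists this as 'python3-rados*'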
2026-03-09T14:28:45.052 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:45.052 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:45.052 INFO:teuthology.orchestra.run.vm04.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T14:28:45.052 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec
2026-03-09T14:28:45.053 INFO:teuthology.orchestra.run.vm04.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-09T14:28:45.053 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T14:28:45.053 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:45.053 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:45.053 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:45.053 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:45.053 INFO:teuthology.orchestra.run.vm04.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:45.053 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:45.053 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:45.053 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:45.053 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:45.053 INFO:teuthology.orchestra.run.vm04.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:45.053 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:45.053 INFO:teuthology.orchestra.run.vm04.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-09T14:28:45.053 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:45.066 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED:
2026-03-09T14:28:45.066 INFO:teuthology.orchestra.run.vm04.stdout: python3-cephfs* python3-rados* python3-rgw*
2026-03-09T14:28:45.066 INFO:teuthology.orchestra.run.vm05.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117527 files and directories currently installed.)
2026-03-09T14:28:45.069 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:45.080 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:45.085 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:45.085 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:45.085 INFO:teuthology.orchestra.run.vm03.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T14:28:45.086 INFO:teuthology.orchestra.run.vm03.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec
2026-03-09T14:28:45.086 INFO:teuthology.orchestra.run.vm03.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-09T14:28:45.086 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T14:28:45.086 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:45.086 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:45.086 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:45.086 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:45.086 INFO:teuthology.orchestra.run.vm03.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:45.086 INFO:teuthology.orchestra.run.vm03.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:45.086 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:45.086 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:45.086 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:45.086 INFO:teuthology.orchestra.run.vm03.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:45.086 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:45.087 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-09T14:28:45.087 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:45.091 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:45.098 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-09T14:28:45.098 INFO:teuthology.orchestra.run.vm03.stdout: python3-cephfs* python3-rados* python3-rgw*
2026-03-09T14:28:45.239 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 3 to remove and 10 not upgraded.
2026-03-09T14:28:45.239 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 2062 kB disk space will be freed.
2026-03-09T14:28:45.257 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 3 to remove and 10 not upgraded.
2026-03-09T14:28:45.257 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 2062 kB disk space will be freed.
2026-03-09T14:28:45.275 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117527 files and directories currently installed.)
2026-03-09T14:28:45.277 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:45.287 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:45.289 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117527 files and directories currently installed.)
2026-03-09T14:28:45.290 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:45.297 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:45.300 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:45.311 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:46.178 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:46.210 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-09T14:28:46.334 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:46.350 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:46.366 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-09T14:28:46.377 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-09T14:28:46.377 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-09T14:28:46.381 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T14:28:46.497 INFO:teuthology.orchestra.run.vm05.stdout:Package 'python3-rgw' is not installed, so not removed
2026-03-09T14:28:46.497 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:46.497 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:46.497 INFO:teuthology.orchestra.run.vm05.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T14:28:46.497 INFO:teuthology.orchestra.run.vm05.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec
2026-03-09T14:28:46.497 INFO:teuthology.orchestra.run.vm05.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-09T14:28:46.497 INFO:teuthology.orchestra.run.vm05.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T14:28:46.497 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:46.497 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:46.497 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:46.497 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:46.497 INFO:teuthology.orchestra.run.vm05.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:46.498 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:46.498 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:46.498 INFO:teuthology.orchestra.run.vm05.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:46.498 INFO:teuthology.orchestra.run.vm05.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:46.498 INFO:teuthology.orchestra.run.vm05.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:46.498 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:46.498 INFO:teuthology.orchestra.run.vm05.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-09T14:28:46.498 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:46.514 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T14:28:46.514 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:46.545 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-09T14:28:46.550 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-09T14:28:46.551 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-09T14:28:46.563 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T14:28:46.563 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T14:28:46.677 INFO:teuthology.orchestra.run.vm04.stdout:Package 'python3-rgw' is not installed, so not removed
2026-03-09T14:28:46.677 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:46.677 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:46.677 INFO:teuthology.orchestra.run.vm04.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T14:28:46.677 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec
2026-03-09T14:28:46.678 INFO:teuthology.orchestra.run.vm04.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-09T14:28:46.678 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T14:28:46.678 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:46.678 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:46.678 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:46.678 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:46.678 INFO:teuthology.orchestra.run.vm04.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:46.678 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:46.678 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:46.678 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:46.678 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:46.678 INFO:teuthology.orchestra.run.vm04.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:46.678 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:46.678 INFO:teuthology.orchestra.run.vm04.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-09T14:28:46.678 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:46.698 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T14:28:46.698 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:46.699 INFO:teuthology.orchestra.run.vm03.stdout:Package 'python3-rgw' is not installed, so not removed
2026-03-09T14:28:46.700 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:46.700 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:46.700 INFO:teuthology.orchestra.run.vm03.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T14:28:46.700 INFO:teuthology.orchestra.run.vm03.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec
2026-03-09T14:28:46.700 INFO:teuthology.orchestra.run.vm03.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-09T14:28:46.700 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T14:28:46.700 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:46.700 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:46.700 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:46.700 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:46.700 INFO:teuthology.orchestra.run.vm03.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:46.700 INFO:teuthology.orchestra.run.vm03.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:46.700 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:46.700 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:46.700 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:46.700 INFO:teuthology.orchestra.run.vm03.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:46.701 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:46.701 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-09T14:28:46.701 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:46.716 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T14:28:46.716 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:46.730 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-09T14:28:46.740 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-09T14:28:46.740 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-09T14:28:46.747 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T14:28:46.850 INFO:teuthology.orchestra.run.vm05.stdout:Package 'python3-cephfs' is not installed, so not removed
2026-03-09T14:28:46.850 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:46.850 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:46.850 INFO:teuthology.orchestra.run.vm05.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T14:28:46.850 INFO:teuthology.orchestra.run.vm05.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec
2026-03-09T14:28:46.850 INFO:teuthology.orchestra.run.vm05.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-09T14:28:46.850 INFO:teuthology.orchestra.run.vm05.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T14:28:46.850 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:46.850 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:46.850 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:46.850 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:46.850 INFO:teuthology.orchestra.run.vm05.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:46.850 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:46.850 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:46.850 INFO:teuthology.orchestra.run.vm05.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:46.850 INFO:teuthology.orchestra.run.vm05.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:46.850 INFO:teuthology.orchestra.run.vm05.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:46.850 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:46.850 INFO:teuthology.orchestra.run.vm05.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-09T14:28:46.850 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:46.867 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T14:28:46.867 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:46.897 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-09T14:28:46.906 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-09T14:28:46.907 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-09T14:28:46.927 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T14:28:46.928 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T14:28:47.037 INFO:teuthology.orchestra.run.vm04.stdout:Package 'python3-cephfs' is not installed, so not removed
2026-03-09T14:28:47.037 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:47.037 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:47.037 INFO:teuthology.orchestra.run.vm04.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T14:28:47.037 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec
2026-03-09T14:28:47.037 INFO:teuthology.orchestra.run.vm04.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-09T14:28:47.037 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T14:28:47.037 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:47.037 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:47.037 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:47.037 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:47.037 INFO:teuthology.orchestra.run.vm04.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:47.037 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:47.037 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:47.037 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:47.037 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:47.037 INFO:teuthology.orchestra.run.vm04.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:47.037 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:47.037 INFO:teuthology.orchestra.run.vm04.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-09T14:28:47.037 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:47.049 INFO:teuthology.orchestra.run.vm03.stdout:Package 'python3-cephfs' is not installed, so not removed
2026-03-09T14:28:47.049 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:47.049 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:47.049 INFO:teuthology.orchestra.run.vm03.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T14:28:47.049 INFO:teuthology.orchestra.run.vm03.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec
2026-03-09T14:28:47.049 INFO:teuthology.orchestra.run.vm03.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-09T14:28:47.049 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T14:28:47.049 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:47.049 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:47.049 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:47.049 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:47.049 INFO:teuthology.orchestra.run.vm03.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:47.049 INFO:teuthology.orchestra.run.vm03.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:47.049 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:47.049 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:47.049 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:47.049 INFO:teuthology.orchestra.run.vm03.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:47.049 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:47.049 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-09T14:28:47.050 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:47.051 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T14:28:47.051 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:47.071 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T14:28:47.071 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:47.083 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-09T14:28:47.084 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-09T14:28:47.084 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-09T14:28:47.102 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T14:28:47.183 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:47.183 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:47.183 INFO:teuthology.orchestra.run.vm05.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T14:28:47.183 INFO:teuthology.orchestra.run.vm05.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec
2026-03-09T14:28:47.183 INFO:teuthology.orchestra.run.vm05.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-09T14:28:47.183 INFO:teuthology.orchestra.run.vm05.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T14:28:47.184 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:47.184 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:47.184 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:47.184 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:47.184 INFO:teuthology.orchestra.run.vm05.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:47.184 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:47.184 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:47.184 INFO:teuthology.orchestra.run.vm05.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:47.184 INFO:teuthology.orchestra.run.vm05.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:47.184 INFO:teuthology.orchestra.run.vm05.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:47.184 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:47.184 INFO:teuthology.orchestra.run.vm05.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-09T14:28:47.184 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:47.192 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be REMOVED:
2026-03-09T14:28:47.193 INFO:teuthology.orchestra.run.vm05.stdout: python3-rbd*
2026-03-09T14:28:47.257 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-09T14:28:47.257 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-09T14:28:47.277 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T14:28:47.278 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T14:28:47.350 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-09T14:28:47.350 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 1186 kB disk space will be freed.
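Note the trailing '*' in apt's "will be REMOVED" list above (python3-rbd*): in apt-get output an asterisk marks a package that will be purged, i.e. its configuration files are deleted along with the package, not merely removed. In other words, the step recorded here behaves like the following hypothetical command (illustrative only, not the literal command from this run):

    # '*' in apt's removal list means purge rather than plain remove
    sudo apt-get -y purge python3-rbd    # 'apt-get remove' would keep conffiles

which is consistent with a purge-then-reinstall pass over the Ceph client packages on each host.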
2026-03-09T14:28:47.381 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:47.381 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:47.381 INFO:teuthology.orchestra.run.vm04.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T14:28:47.381 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec
2026-03-09T14:28:47.382 INFO:teuthology.orchestra.run.vm04.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-09T14:28:47.382 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T14:28:47.382 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:47.382 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:47.382 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:47.382 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:47.382 INFO:teuthology.orchestra.run.vm04.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:47.382 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:47.382 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:47.382 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:47.382 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:47.382 INFO:teuthology.orchestra.run.vm04.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:47.382 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:47.382 INFO:teuthology.orchestra.run.vm04.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-09T14:28:47.382 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:47.390 INFO:teuthology.orchestra.run.vm05.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117503 files and directories currently installed.)
2026-03-09T14:28:47.392 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:47.395 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED:
2026-03-09T14:28:47.396 INFO:teuthology.orchestra.run.vm04.stdout: python3-rbd*
2026-03-09T14:28:47.430 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:47.431 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:47.431 INFO:teuthology.orchestra.run.vm03.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T14:28:47.431 INFO:teuthology.orchestra.run.vm03.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec
2026-03-09T14:28:47.432 INFO:teuthology.orchestra.run.vm03.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-09T14:28:47.432 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T14:28:47.432 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:47.432 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:47.432 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:47.432 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:47.432 INFO:teuthology.orchestra.run.vm03.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:47.432 INFO:teuthology.orchestra.run.vm03.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:47.432 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:47.432 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:47.432 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:47.432 INFO:teuthology.orchestra.run.vm03.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:47.432 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:47.432 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-09T14:28:47.432 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:47.440 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-09T14:28:47.440 INFO:teuthology.orchestra.run.vm03.stdout: python3-rbd*
2026-03-09T14:28:47.562 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-09T14:28:47.562 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 1186 kB disk space will be freed.
2026-03-09T14:28:47.597 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-09T14:28:47.597 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 1186 kB disk space will be freed.
2026-03-09T14:28:47.597 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117503 files and directories currently installed.)
2026-03-09T14:28:47.599 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:47.632 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117503 files and directories currently installed.)
2026-03-09T14:28:47.634 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:48.418 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:48.449 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-09T14:28:48.611 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-09T14:28:48.612 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-09T14:28:48.622 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:48.635 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:48.655 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-09T14:28:48.668 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T14:28:48.740 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:48.740 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:48.740 INFO:teuthology.orchestra.run.vm05.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T14:28:48.740 INFO:teuthology.orchestra.run.vm05.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec
2026-03-09T14:28:48.740 INFO:teuthology.orchestra.run.vm05.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-09T14:28:48.740 INFO:teuthology.orchestra.run.vm05.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T14:28:48.740 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:48.740 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:48.740 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:48.740 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:48.740 INFO:teuthology.orchestra.run.vm05.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:48.740 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:48.740 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:48.740 INFO:teuthology.orchestra.run.vm05.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:48.740 INFO:teuthology.orchestra.run.vm05.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:48.740 INFO:teuthology.orchestra.run.vm05.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:48.740 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:48.740 INFO:teuthology.orchestra.run.vm05.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-09T14:28:48.740 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:48.749 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be REMOVED:
2026-03-09T14:28:48.749 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs-dev* libcephfs2*
2026-03-09T14:28:48.834 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-09T14:28:48.834 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-09T14:28:48.849 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T14:28:48.849 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T14:28:48.910 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded.
2026-03-09T14:28:48.911 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 3202 kB disk space will be freed.
2026-03-09T14:28:48.948 INFO:teuthology.orchestra.run.vm05.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117495 files and directories currently installed.)
2026-03-09T14:28:48.950 INFO:teuthology.orchestra.run.vm05.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:48.962 INFO:teuthology.orchestra.run.vm05.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:48.974 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:48.975 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:48.975 INFO:teuthology.orchestra.run.vm04.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T14:28:48.975 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec
2026-03-09T14:28:48.975 INFO:teuthology.orchestra.run.vm04.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-09T14:28:48.975 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T14:28:48.975 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:48.975 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:48.975 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:48.975 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:48.975 INFO:teuthology.orchestra.run.vm04.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:48.975 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:48.975 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:48.975 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:48.975 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:48.975 INFO:teuthology.orchestra.run.vm04.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:48.976 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:48.976 INFO:teuthology.orchestra.run.vm04.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-09T14:28:48.976 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:48.986 INFO:teuthology.orchestra.run.vm05.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T14:28:48.989 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED:
2026-03-09T14:28:48.990 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs-dev* libcephfs2*
2026-03-09T14:28:48.994 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:48.994 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:48.994 INFO:teuthology.orchestra.run.vm03.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T14:28:48.994 INFO:teuthology.orchestra.run.vm03.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec
2026-03-09T14:28:48.994 INFO:teuthology.orchestra.run.vm03.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-09T14:28:48.994 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T14:28:48.994 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:48.995 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:48.995 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:48.995 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:48.995 INFO:teuthology.orchestra.run.vm03.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:48.995 INFO:teuthology.orchestra.run.vm03.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:48.995 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:48.995 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:48.995 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:48.995 INFO:teuthology.orchestra.run.vm03.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:48.995 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:48.995 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-09T14:28:48.995 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:49.007 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-09T14:28:49.007 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-dev* libcephfs2*
2026-03-09T14:28:49.157 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded.
2026-03-09T14:28:49.157 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 3202 kB disk space will be freed.
2026-03-09T14:28:49.163 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded.
2026-03-09T14:28:49.163 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 3202 kB disk space will be freed.
2026-03-09T14:28:49.192 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117495 files and directories currently installed.)
2026-03-09T14:28:49.194 INFO:teuthology.orchestra.run.vm04.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:49.199 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117495 files and directories currently installed.)
2026-03-09T14:28:49.201 INFO:teuthology.orchestra.run.vm03.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:49.205 INFO:teuthology.orchestra.run.vm04.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:49.213 INFO:teuthology.orchestra.run.vm03.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:49.228 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T14:28:49.238 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T14:28:50.002 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:50.033 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-09T14:28:50.204 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-09T14:28:50.204 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-09T14:28:50.226 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:50.259 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-09T14:28:50.268 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:50.301 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T14:28:50.343 INFO:teuthology.orchestra.run.vm05.stdout:Package 'libcephfs-dev' is not installed, so not removed
2026-03-09T14:28:50.343 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:50.343 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:50.343 INFO:teuthology.orchestra.run.vm05.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T14:28:50.343 INFO:teuthology.orchestra.run.vm05.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec
2026-03-09T14:28:50.343 INFO:teuthology.orchestra.run.vm05.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-09T14:28:50.343 INFO:teuthology.orchestra.run.vm05.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T14:28:50.343 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:50.343 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:50.343 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:50.343 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:50.343 INFO:teuthology.orchestra.run.vm05.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:50.343 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:50.343 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:50.343 INFO:teuthology.orchestra.run.vm05.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:50.343 INFO:teuthology.orchestra.run.vm05.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:50.343 INFO:teuthology.orchestra.run.vm05.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:50.343 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:50.344 INFO:teuthology.orchestra.run.vm05.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-09T14:28:50.344 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:50.359 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T14:28:50.359 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:50.390 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-09T14:28:50.435 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-09T14:28:50.436 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-09T14:28:50.493 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T14:28:50.493 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T14:28:50.547 INFO:teuthology.orchestra.run.vm04.stdout:Package 'libcephfs-dev' is not installed, so not removed
2026-03-09T14:28:50.547 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:50.547 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:50.547 INFO:teuthology.orchestra.run.vm04.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T14:28:50.547 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec
2026-03-09T14:28:50.547 INFO:teuthology.orchestra.run.vm04.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-09T14:28:50.547 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T14:28:50.548 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:50.548 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:50.548 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:50.548 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:50.548 INFO:teuthology.orchestra.run.vm04.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:50.548 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:50.548 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:50.548 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:50.548 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:50.548 INFO:teuthology.orchestra.run.vm04.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:50.548 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:50.548 INFO:teuthology.orchestra.run.vm04.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-09T14:28:50.548 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:50.567 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T14:28:50.567 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:50.579 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-09T14:28:50.579 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-09T14:28:50.599 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-09T14:28:50.644 INFO:teuthology.orchestra.run.vm03.stdout:Package 'libcephfs-dev' is not installed, so not removed
2026-03-09T14:28:50.644 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:50.644 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:50.644 INFO:teuthology.orchestra.run.vm03.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T14:28:50.644 INFO:teuthology.orchestra.run.vm03.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec
2026-03-09T14:28:50.645 INFO:teuthology.orchestra.run.vm03.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-09T14:28:50.645 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T14:28:50.645 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:50.645 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:50.645 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:50.645 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:50.645 INFO:teuthology.orchestra.run.vm03.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:50.645 INFO:teuthology.orchestra.run.vm03.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:50.645 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:50.645 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:50.645 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:50.645 INFO:teuthology.orchestra.run.vm03.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:50.645 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:50.645 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-09T14:28:50.645 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:50.665 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T14:28:50.666 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:50.698 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T14:28:50.753 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:50.754 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:50.754 INFO:teuthology.orchestra.run.vm05.stdout: libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 libgfxdr0 libglusterfs0
2026-03-09T14:28:50.754 INFO:teuthology.orchestra.run.vm05.stdout: libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 liboath0 libonig5
2026-03-09T14:28:50.754 INFO:teuthology.orchestra.run.vm05.stdout: libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 libqt5network5
2026-03-09T14:28:50.754 INFO:teuthology.orchestra.run.vm05.stdout: librdkafka1 libreadline-dev libthrift-0.16.0 lua-any lua-sec lua-socket
2026-03-09T14:28:50.754 INFO:teuthology.orchestra.run.vm05.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-09T14:28:50.754 INFO:teuthology.orchestra.run.vm05.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T14:28:50.754 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:50.754 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:50.754 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:50.754 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:50.754 INFO:teuthology.orchestra.run.vm05.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:50.754 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:50.754 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:50.754 INFO:teuthology.orchestra.run.vm05.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:50.754 INFO:teuthology.orchestra.run.vm05.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:50.754 INFO:teuthology.orchestra.run.vm05.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:50.754 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:50.754 INFO:teuthology.orchestra.run.vm05.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip
2026-03-09T14:28:50.754 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:50.763 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be REMOVED:
2026-03-09T14:28:50.763 INFO:teuthology.orchestra.run.vm05.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph*
2026-03-09T14:28:50.763 INFO:teuthology.orchestra.run.vm05.stdout: qemu-block-extra* rbd-fuse*
2026-03-09T14:28:50.797 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-09T14:28:50.798 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-09T14:28:50.887 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T14:28:50.888 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T14:28:50.921 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded.
2026-03-09T14:28:50.921 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 51.6 MB disk space will be freed.
2026-03-09T14:28:50.928 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:50.928 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:50.928 INFO:teuthology.orchestra.run.vm04.stdout: libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 libgfxdr0 libglusterfs0
2026-03-09T14:28:50.928 INFO:teuthology.orchestra.run.vm04.stdout: libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 liboath0 libonig5
2026-03-09T14:28:50.929 INFO:teuthology.orchestra.run.vm04.stdout: libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 libqt5network5
2026-03-09T14:28:50.929 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka1 libreadline-dev libthrift-0.16.0 lua-any lua-sec lua-socket
2026-03-09T14:28:50.929 INFO:teuthology.orchestra.run.vm04.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-09T14:28:50.929 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T14:28:50.929 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:50.929 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:50.929 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:50.929 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:50.929 INFO:teuthology.orchestra.run.vm04.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:50.929 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:50.929 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:50.929 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:50.929 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:50.929 INFO:teuthology.orchestra.run.vm04.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:50.929 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:50.929 INFO:teuthology.orchestra.run.vm04.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip
2026-03-09T14:28:50.929 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:50.941 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED:
2026-03-09T14:28:50.941 INFO:teuthology.orchestra.run.vm04.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph*
2026-03-09T14:28:50.941 INFO:teuthology.orchestra.run.vm04.stdout: qemu-block-extra* rbd-fuse*
2026-03-09T14:28:50.960 INFO:teuthology.orchestra.run.vm05.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117480 files and directories currently installed.)
2026-03-09T14:28:50.963 INFO:teuthology.orchestra.run.vm05.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:50.975 INFO:teuthology.orchestra.run.vm05.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:50.987 INFO:teuthology.orchestra.run.vm05.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:50.997 INFO:teuthology.orchestra.run.vm05.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ...
2026-03-09T14:28:51.042 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T14:28:51.042 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-09T14:28:51.042 INFO:teuthology.orchestra.run.vm03.stdout: libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 libgfxdr0 libglusterfs0
2026-03-09T14:28:51.042 INFO:teuthology.orchestra.run.vm03.stdout: libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 liboath0 libonig5
2026-03-09T14:28:51.042 INFO:teuthology.orchestra.run.vm03.stdout: libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 libqt5network5
2026-03-09T14:28:51.043 INFO:teuthology.orchestra.run.vm03.stdout: librdkafka1 libreadline-dev libthrift-0.16.0 lua-any lua-sec lua-socket
2026-03-09T14:28:51.043 INFO:teuthology.orchestra.run.vm03.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-09T14:28:51.043 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T14:28:51.043 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-09T14:28:51.043 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-09T14:28:51.043 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-09T14:28:51.043 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-09T14:28:51.043 INFO:teuthology.orchestra.run.vm03.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-09T14:28:51.043 INFO:teuthology.orchestra.run.vm03.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-09T14:28:51.043 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T14:28:51.043 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T14:28:51.043 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T14:28:51.043 INFO:teuthology.orchestra.run.vm03.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-09T14:28:51.043 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-09T14:28:51.043 INFO:teuthology.orchestra.run.vm03.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip
2026-03-09T14:28:51.043 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:51.053 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-09T14:28:51.053 INFO:teuthology.orchestra.run.vm03.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph*
2026-03-09T14:28:51.054 INFO:teuthology.orchestra.run.vm03.stdout: qemu-block-extra* rbd-fuse*
2026-03-09T14:28:51.106 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded.
2026-03-09T14:28:51.106 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 51.6 MB disk space will be freed.
2026-03-09T14:28:51.142 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117480 files and directories currently installed.)
2026-03-09T14:28:51.144 INFO:teuthology.orchestra.run.vm04.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:51.156 INFO:teuthology.orchestra.run.vm04.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:51.167 INFO:teuthology.orchestra.run.vm04.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:51.178 INFO:teuthology.orchestra.run.vm04.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ...
2026-03-09T14:28:51.208 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded.
2026-03-09T14:28:51.208 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 51.6 MB disk space will be freed.
2026-03-09T14:28:51.241 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117480 files and directories currently installed.)
2026-03-09T14:28:51.243 INFO:teuthology.orchestra.run.vm03.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:51.255 INFO:teuthology.orchestra.run.vm03.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:51.267 INFO:teuthology.orchestra.run.vm03.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:51.277 INFO:teuthology.orchestra.run.vm03.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ...
2026-03-09T14:28:51.353 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:51 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
[14:28:51.353-354: journalctl captures the identical KillMode=none warning for ceph.osd.5, ceph.osd.6, ceph.osd.7 and ceph.iscsi.iscsi.b on vm05]
2026-03-09T14:28:51.434 INFO:teuthology.orchestra.run.vm05.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:51.446 INFO:teuthology.orchestra.run.vm05.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:51.459 INFO:teuthology.orchestra.run.vm05.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:51.485 INFO:teuthology.orchestra.run.vm05.stdout:Processing triggers for man-db (2.10.2-1) ...
[14:28:51.504-505: the same KillMode=none warning for ceph.osd.2, ceph.osd.4, ceph.mon.b and ceph.osd.3 on vm04]
2026-03-09T14:28:51.532 INFO:teuthology.orchestra.run.vm05.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
[14:28:51.550-551: the same KillMode=none warning for ceph.osd.1, ceph.iscsi.iscsi.a, ceph.mon.a, ceph.mgr.x and ceph.osd.0 on vm03]
2026-03-09T14:28:51.597 INFO:teuthology.orchestra.run.vm05.stdout:(Reading database ... 117429 files and directories currently installed.)
2026-03-09T14:28:51.599 INFO:teuthology.orchestra.run.vm05.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ...
2026-03-09T14:28:51.615 INFO:teuthology.orchestra.run.vm04.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:51.627 INFO:teuthology.orchestra.run.vm04.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
[14:28:51.627: the same KillMode=none warning repeats for ceph.mon.c, ceph.osd.5, ceph.osd.6, ceph.osd.7 and ceph.iscsi.iscsi.b on vm05]
2026-03-09T14:28:51.639 INFO:teuthology.orchestra.run.vm04.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:51.662 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T14:28:51.676 INFO:teuthology.orchestra.run.vm03.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:51.687 INFO:teuthology.orchestra.run.vm03.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:51.702 INFO:teuthology.orchestra.run.vm03.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:51.707 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T14:28:51.726 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T14:28:51.770 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 117429 files and directories currently installed.)
2026-03-09T14:28:51.772 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ...
2026-03-09T14:28:51.777 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
[14:28:51.799-800: the same KillMode=none warning repeats for ceph.osd.3, ceph.mon.b, ceph.osd.2 and ceph.osd.4 on vm04]
2026-03-09T14:28:51.841 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117429 files and directories currently installed.)
2026-03-09T14:28:51.843 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ...
[14:28:51.868-869: the same KillMode=none warning repeats for ceph.mon.a, ceph.mgr.x, ceph.osd.1, ceph.iscsi.iscsi.a and ceph.osd.0 on vm03]
[14:28:51.964-965: the same warning repeats for ceph.mon.c, ceph.osd.5, ceph.osd.6, ceph.osd.7 and ceph.iscsi.iscsi.b on vm05]
[14:28:52.108-109: the same warning repeats for ceph.mon.b, ceph.osd.2, ceph.osd.3 and ceph.osd.4 on vm04]
[14:28:52.163: the same warning repeats for ceph.mon.a, ceph.osd.0, ceph.osd.1, ceph.mgr.x and ceph.iscsi.iscsi.a on vm03]
[14:28:52.255: the same warning repeats for ceph.mon.c, ceph.osd.5, ceph.osd.6, ceph.osd.7 and ceph.iscsi.iscsi.b on vm05]
[14:28:52.505: the same warning repeats for ceph.mon.b, ceph.osd.2, ceph.osd.3 and ceph.osd.4 on vm04]
[14:28:52.551: the same warning repeats for ceph.mon.a, ceph.mgr.x, ceph.osd.0, ceph.osd.1 and ceph.iscsi.iscsi.a on vm03]
2026-03-09T14:28:53.092 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:53.125 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-09T14:28:53.233 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:53.267 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-09T14:28:53.292 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:53.305 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-09T14:28:53.306 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-09T14:28:53.325 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T14:28:53.381 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-09T14:28:53.382 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
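The KillMode=none warnings above all point at the cephadm-generated unit template ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service; cephadm sets KillMode=none on purpose because the daemons run inside podman-managed containers whose processes systemd should not reap itself. Purely for reference, the change systemd's hint asks for would be a drop-in override along these lines; this is a hypothetical sketch, not something this job performs, and not advisable on a cephadm-managed host:

    # Hypothetical sketch: override KillMode for every instance of the
    # templated unit via a drop-in (fsid taken from the warnings above).
    # Do NOT do this on a cephadm host; KillMode=none there is intentional.
    UNIT=ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service
    sudo mkdir -p /etc/systemd/system/${UNIT}.d
    printf '[Service]\nKillMode=mixed\n' | sudo tee /etc/systemd/system/${UNIT}.d/10-killmode.conf
    sudo systemctl daemon-reload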
2026-03-09T14:28:53.436 INFO:teuthology.orchestra.run.vm05.stdout:Package 'librbd1' is not installed, so not removed
2026-03-09T14:28:53.436 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
[14:28:53.436-437: vm05 lists the same autoremove candidates as vm03 above, ceph-mgr-modules-core through zip]
2026-03-09T14:28:53.437 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:53.453 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T14:28:53.453 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:53.485 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-09T14:28:53.494 INFO:teuthology.orchestra.run.vm04.stdout:Package 'librbd1' is not installed, so not removed
2026-03-09T14:28:53.494 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
[14:28:53.494-495: vm04 lists the same autoremove candidates as vm03 above]
2026-03-09T14:28:53.495 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:53.515 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T14:28:53.515 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:53.522 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T14:28:53.523 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T14:28:53.547 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-09T14:28:53.632 INFO:teuthology.orchestra.run.vm03.stdout:Package 'librbd1' is not installed, so not removed
2026-03-09T14:28:53.632 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
[14:28:53.632: vm03 lists the same autoremove candidates as above]
2026-03-09T14:28:53.632 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:53.648 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T14:28:53.648 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:53.675 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-09T14:28:53.676 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-09T14:28:53.680 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
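Every apt-get call in this teardown phase trips the same stderr warning because teuthology still passes the long-deprecated --force-yes (visible in the DEBUG command lines below). A minimal sketch of the modern spelling, assuming the intent is only an unattended run; the exact --allow-* subset needed depends on what --force-yes was papering over:

    # Hypothetical replacement for the deprecated --force-yes:
    # each --allow-* flag opts into one specific behaviour that
    # --force-yes used to enable wholesale.
    sudo DEBIAN_FRONTEND=noninteractive apt-get -y \
        --allow-downgrades --allow-remove-essential --allow-change-held-packages \
        -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" \
        autoremove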
2026-03-09T14:28:53.740 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-09T14:28:53.740 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-09T14:28:53.794 INFO:teuthology.orchestra.run.vm05.stdout:Package 'rbd-fuse' is not installed, so not removed
2026-03-09T14:28:53.794 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
[14:28:53.794: vm05 lists the same autoremove candidates as above]
2026-03-09T14:28:53.794 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:53.797 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T14:28:53.798 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T14:28:53.812 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T14:28:53.812 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:53.813 DEBUG:teuthology.orchestra.run.vm05:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq
2026-03-09T14:28:53.867 DEBUG:teuthology.orchestra.run.vm05:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove
2026-03-09T14:28:53.873 INFO:teuthology.orchestra.run.vm04.stdout:Package 'rbd-fuse' is not installed, so not removed
2026-03-09T14:28:53.873 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
[14:28:53.873: vm04 lists the same autoremove candidates as above]
2026-03-09T14:28:53.873 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:53.887 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T14:28:53.887 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:53.888 DEBUG:teuthology.orchestra.run.vm04:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq
2026-03-09T14:28:53.895 INFO:teuthology.orchestra.run.vm03.stdout:Package 'rbd-fuse' is not installed, so not removed
2026-03-09T14:28:53.895 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
[14:28:53.895-896: vm03 lists the same autoremove candidates as above]
2026-03-09T14:28:53.896 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T14:28:53.911 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
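The dpkg pipeline in the DEBUG lines above and below is teuthology's pre-clean step for packages left stuck mid-(un)install. Annotated, using only what the log itself records:

    # Column 1 of `dpkg -l` output is the desired state, column 2 the current
    # status, column 3 an error flag. '^.\(U\|H\)R' therefore selects packages
    # that are Unpacked or Half-installed and flagged R (reinstall required);
    # awk keeps just the package name (field 2), and xargs purges them with
    # dpkg -P, running nothing at all when the list is empty (--no-run-if-empty).
    dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' \
        | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq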
2026-03-09T14:28:53.911 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T14:28:53.913 DEBUG:teuthology.orchestra.run.vm03:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq
2026-03-09T14:28:53.943 DEBUG:teuthology.orchestra.run.vm04:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove
2026-03-09T14:28:53.944 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-09T14:28:53.967 DEBUG:teuthology.orchestra.run.vm03:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove
2026-03-09T14:28:54.015 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-09T14:28:54.043 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T14:28:54.135 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-09T14:28:54.136 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-09T14:28:54.214 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-09T14:28:54.215 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-09T14:28:54.233 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T14:28:54.234 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T14:28:54.301 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be REMOVED:
[14:28:54.301-302: vm05's removal list matches the autoremove candidate list above, ceph-mgr-modules-core through zip]
2026-03-09T14:28:54.376 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED:
[14:28:54.376-377: vm04's removal list is identical to vm05's above]
2026-03-09T14:28:54.389 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-09T14:28:54.389 INFO:teuthology.orchestra.run.vm03.stdout: libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 libgfxdr0 libglusterfs0 2026-03-09T14:28:54.389 INFO:teuthology.orchestra.run.vm03.stdout: libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 liboath0 libonig5 2026-03-09T14:28:54.389 INFO:teuthology.orchestra.run.vm03.stdout: libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 libqt5network5 2026-03-09T14:28:54.389 INFO:teuthology.orchestra.run.vm03.stdout: librdkafka1 libreadline-dev libthrift-0.16.0 lua-any lua-sec lua-socket 2026-03-09T14:28:54.389 INFO:teuthology.orchestra.run.vm03.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-09T14:28:54.389 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T14:28:54.389 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-09T14:28:54.390 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-09T14:28:54.390 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-09T14:28:54.390 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-09T14:28:54.390 INFO:teuthology.orchestra.run.vm03.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-09T14:28:54.390 INFO:teuthology.orchestra.run.vm03.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-09T14:28:54.390 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-09T14:28:54.390 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-09T14:28:54.390 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-09T14:28:54.390 INFO:teuthology.orchestra.run.vm03.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-09T14:28:54.390 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-09T14:28:54.390 INFO:teuthology.orchestra.run.vm03.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip 2026-03-09T14:28:54.468 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 83 to remove and 10 not upgraded. 2026-03-09T14:28:54.469 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 103 MB disk space will be freed. 2026-03-09T14:28:54.502 INFO:teuthology.orchestra.run.vm05.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117429 files and directories currently installed.) 2026-03-09T14:28:54.504 INFO:teuthology.orchestra.run.vm05.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T14:28:54.520 INFO:teuthology.orchestra.run.vm05.stdout:Removing jq (1.6-2.1ubuntu3.1) ... 2026-03-09T14:28:54.533 INFO:teuthology.orchestra.run.vm05.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-09T14:28:54.539 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 83 to remove and 10 not upgraded. 2026-03-09T14:28:54.539 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 103 MB disk space will be freed. 2026-03-09T14:28:54.546 INFO:teuthology.orchestra.run.vm05.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-09T14:28:54.549 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 83 to remove and 10 not upgraded. 2026-03-09T14:28:54.549 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 103 MB disk space will be freed. 2026-03-09T14:28:54.557 INFO:teuthology.orchestra.run.vm05.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T14:28:54.568 INFO:teuthology.orchestra.run.vm05.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T14:28:54.575 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117429 files and directories currently installed.) 2026-03-09T14:28:54.577 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:28:54.580 INFO:teuthology.orchestra.run.vm05.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T14:28:54.588 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117429 files and directories currently installed.) 2026-03-09T14:28:54.590 INFO:teuthology.orchestra.run.vm05.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T14:28:54.591 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T14:28:54.593 INFO:teuthology.orchestra.run.vm04.stdout:Removing jq (1.6-2.1ubuntu3.1) ... 2026-03-09T14:28:54.605 INFO:teuthology.orchestra.run.vm04.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-09T14:28:54.607 INFO:teuthology.orchestra.run.vm03.stdout:Removing jq (1.6-2.1ubuntu3.1) ... 2026-03-09T14:28:54.610 INFO:teuthology.orchestra.run.vm05.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T14:28:54.618 INFO:teuthology.orchestra.run.vm04.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ... 
2026-03-09T14:28:54.619 INFO:teuthology.orchestra.run.vm03.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-09T14:28:54.621 INFO:teuthology.orchestra.run.vm05.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T14:28:54.629 INFO:teuthology.orchestra.run.vm04.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T14:28:54.631 INFO:teuthology.orchestra.run.vm03.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-09T14:28:54.632 INFO:teuthology.orchestra.run.vm05.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T14:28:54.640 INFO:teuthology.orchestra.run.vm04.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T14:28:54.644 INFO:teuthology.orchestra.run.vm03.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T14:28:54.644 INFO:teuthology.orchestra.run.vm05.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T14:28:54.651 INFO:teuthology.orchestra.run.vm04.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T14:28:54.656 INFO:teuthology.orchestra.run.vm05.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T14:28:54.656 INFO:teuthology.orchestra.run.vm03.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T14:28:54.662 INFO:teuthology.orchestra.run.vm04.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T14:28:54.692 INFO:teuthology.orchestra.run.vm05.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T14:28:54.693 INFO:teuthology.orchestra.run.vm03.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T14:28:54.703 INFO:teuthology.orchestra.run.vm05.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ... 2026-03-09T14:28:54.703 INFO:teuthology.orchestra.run.vm04.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T14:28:54.704 INFO:teuthology.orchestra.run.vm03.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T14:28:54.714 INFO:teuthology.orchestra.run.vm05.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T14:28:54.714 INFO:teuthology.orchestra.run.vm04.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T14:28:54.724 INFO:teuthology.orchestra.run.vm03.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T14:28:54.725 INFO:teuthology.orchestra.run.vm05.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-09T14:28:54.726 INFO:teuthology.orchestra.run.vm04.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T14:28:54.735 INFO:teuthology.orchestra.run.vm03.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T14:28:54.737 INFO:teuthology.orchestra.run.vm05.stdout:Removing luarocks (3.8.0+dfsg1-1) ... 2026-03-09T14:28:54.737 INFO:teuthology.orchestra.run.vm04.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T14:28:54.745 INFO:teuthology.orchestra.run.vm03.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T14:28:54.748 INFO:teuthology.orchestra.run.vm04.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T14:28:54.756 INFO:teuthology.orchestra.run.vm03.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T14:28:54.759 INFO:teuthology.orchestra.run.vm04.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T14:28:54.761 INFO:teuthology.orchestra.run.vm05.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ... 
2026-03-09T14:28:54.765 INFO:teuthology.orchestra.run.vm03.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T14:28:54.771 INFO:teuthology.orchestra.run.vm04.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ... 2026-03-09T14:28:54.772 INFO:teuthology.orchestra.run.vm05.stdout:Removing libnbd0 (1.10.5-1) ... 2026-03-09T14:28:54.775 INFO:teuthology.orchestra.run.vm03.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T14:28:54.782 INFO:teuthology.orchestra.run.vm04.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T14:28:54.783 INFO:teuthology.orchestra.run.vm05.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T14:28:54.785 INFO:teuthology.orchestra.run.vm03.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ... 2026-03-09T14:28:54.793 INFO:teuthology.orchestra.run.vm04.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-09T14:28:54.794 INFO:teuthology.orchestra.run.vm05.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T14:28:54.795 INFO:teuthology.orchestra.run.vm03.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T14:28:54.804 INFO:teuthology.orchestra.run.vm04.stdout:Removing luarocks (3.8.0+dfsg1-1) ... 2026-03-09T14:28:54.804 INFO:teuthology.orchestra.run.vm05.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T14:28:54.805 INFO:teuthology.orchestra.run.vm03.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-09T14:28:54.815 INFO:teuthology.orchestra.run.vm03.stdout:Removing luarocks (3.8.0+dfsg1-1) ... 2026-03-09T14:28:54.815 INFO:teuthology.orchestra.run.vm05.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ... 2026-03-09T14:28:54.826 INFO:teuthology.orchestra.run.vm05.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-09T14:28:54.828 INFO:teuthology.orchestra.run.vm04.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-09T14:28:54.836 INFO:teuthology.orchestra.run.vm05.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ... 2026-03-09T14:28:54.839 INFO:teuthology.orchestra.run.vm03.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-09T14:28:54.840 INFO:teuthology.orchestra.run.vm04.stdout:Removing libnbd0 (1.10.5-1) ... 2026-03-09T14:28:54.847 INFO:teuthology.orchestra.run.vm05.stdout:Removing lua-any (27ubuntu1) ... 2026-03-09T14:28:54.850 INFO:teuthology.orchestra.run.vm03.stdout:Removing libnbd0 (1.10.5-1) ... 2026-03-09T14:28:54.850 INFO:teuthology.orchestra.run.vm04.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T14:28:54.857 INFO:teuthology.orchestra.run.vm05.stdout:Removing lua-sec:amd64 (1.0.2-1) ... 2026-03-09T14:28:54.859 INFO:teuthology.orchestra.run.vm03.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T14:28:54.861 INFO:teuthology.orchestra.run.vm04.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T14:28:54.868 INFO:teuthology.orchestra.run.vm05.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-09T14:28:54.869 INFO:teuthology.orchestra.run.vm03.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T14:28:54.871 INFO:teuthology.orchestra.run.vm04.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T14:28:54.879 INFO:teuthology.orchestra.run.vm03.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T14:28:54.881 INFO:teuthology.orchestra.run.vm05.stdout:Removing lua5.1 (5.1.5-8.1build4) ... 2026-03-09T14:28:54.881 INFO:teuthology.orchestra.run.vm04.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ... 
2026-03-09T14:28:54.889 INFO:teuthology.orchestra.run.vm03.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ...
2026-03-09T14:28:54.892 INFO:teuthology.orchestra.run.vm04.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-09T14:28:54.897 INFO:teuthology.orchestra.run.vm05.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ...
2026-03-09T14:28:54.898 INFO:teuthology.orchestra.run.vm03.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-09T14:28:54.902 INFO:teuthology.orchestra.run.vm04.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ...
2026-03-09T14:28:54.908 INFO:teuthology.orchestra.run.vm03.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ...
2026-03-09T14:28:54.913 INFO:teuthology.orchestra.run.vm04.stdout:Removing lua-any (27ubuntu1) ...
2026-03-09T14:28:54.918 INFO:teuthology.orchestra.run.vm03.stdout:Removing lua-any (27ubuntu1) ...
2026-03-09T14:28:54.923 INFO:teuthology.orchestra.run.vm04.stdout:Removing lua-sec:amd64 (1.0.2-1) ...
2026-03-09T14:28:54.928 INFO:teuthology.orchestra.run.vm03.stdout:Removing lua-sec:amd64 (1.0.2-1) ...
2026-03-09T14:28:54.934 INFO:teuthology.orchestra.run.vm04.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-09T14:28:54.938 INFO:teuthology.orchestra.run.vm03.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-09T14:28:54.947 INFO:teuthology.orchestra.run.vm04.stdout:Removing lua5.1 (5.1.5-8.1build4) ...
2026-03-09T14:28:54.950 INFO:teuthology.orchestra.run.vm03.stdout:Removing lua5.1 (5.1.5-8.1build4) ...
2026-03-09T14:28:54.964 INFO:teuthology.orchestra.run.vm04.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ...
2026-03-09T14:28:54.966 INFO:teuthology.orchestra.run.vm03.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ...
2026-03-09T14:28:55.246 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:55 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:55.247 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:55 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:55.247 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:55 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:55.247 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:55 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:55.247 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:55 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:55.254 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:55 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:55.255 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:55 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:55.255 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:55 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:55.255 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:55 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:55.300 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:55 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:55.301 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:55 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:55.301 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:55 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:55.301 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:55 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:55.301 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:55 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:55.325 INFO:teuthology.orchestra.run.vm05.stdout:Removing pkg-config (0.29.2-1ubuntu3) ...
2026-03-09T14:28:55.355 INFO:teuthology.orchestra.run.vm05.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-09T14:28:55.360 INFO:teuthology.orchestra.run.vm04.stdout:Removing pkg-config (0.29.2-1ubuntu3) ...
2026-03-09T14:28:55.368 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ...
2026-03-09T14:28:55.379 INFO:teuthology.orchestra.run.vm03.stdout:Removing pkg-config (0.29.2-1ubuntu3) ...
2026-03-09T14:28:55.389 INFO:teuthology.orchestra.run.vm04.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-09T14:28:55.402 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ...
2026-03-09T14:28:55.410 INFO:teuthology.orchestra.run.vm03.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-09T14:28:55.423 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ...
2026-03-09T14:28:55.424 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-webtest (2.0.35-1) ...
2026-03-09T14:28:55.460 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-webtest (2.0.35-1) ...
2026-03-09T14:28:55.475 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-pastescript (2.0.2-4) ...
2026-03-09T14:28:55.486 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-webtest (2.0.35-1) ...
2026-03-09T14:28:55.505 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:55 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:55.505 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:55 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
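The journalctl warnings above all point at line 23 of the cephadm-generated unit template ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service, which sets KillMode=none. For reference only, a minimal sketch of the change systemd is asking for, as a drop-in override; nothing like this was run as part of this job, cephadm sets KillMode=none deliberately for container-managed daemons, and the fsid is simply taken from the unit name in the warnings:

    # Hypothetical drop-in that would override KillMode for every instance of the template.
    sudo mkdir -p /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service.d
    # 'mixed' sends SIGTERM to the main process and a final SIGKILL to the rest
    # of the unit's control group, restoring systemd's lifecycle management.
    printf '[Service]\nKillMode=mixed\n' | sudo tee /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service.d/killmode.conf
    # Reload unit files so the override takes effect.
    sudo systemctl daemon-reload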
2026-03-09T14:28:55.505 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:55 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:55.505 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:55 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:55.505 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:55 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:55.510 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-pastescript (2.0.2-4) ...
2026-03-09T14:28:55.528 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-pastedeploy (2.1.1-1) ...
2026-03-09T14:28:55.536 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-pastescript (2.0.2-4) ...
2026-03-09T14:28:55.562 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-pastedeploy (2.1.1-1) ...
2026-03-09T14:28:55.587 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-pastedeploy (2.1.1-1) ...
2026-03-09T14:28:55.589 INFO:teuthology.orchestra.run.vm05.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ...
2026-03-09T14:28:55.599 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-09T14:28:55.621 INFO:teuthology.orchestra.run.vm04.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ...
2026-03-09T14:28:55.631 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-09T14:28:55.646 INFO:teuthology.orchestra.run.vm03.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ...
2026-03-09T14:28:55.652 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-09T14:28:55.656 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-09T14:28:55.686 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-09T14:28:55.713 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-09T14:28:55.754 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:55 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:55.755 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:55 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:55.755 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:55 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:55.755 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:55 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:55.801 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:55 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:55.801 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:55 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:55.801 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:55 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:55.801 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:55 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:55.801 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:55 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:55.901 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-google-auth (1.5.1-3) ...
2026-03-09T14:28:55.935 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-google-auth (1.5.1-3) ...
2026-03-09T14:28:55.948 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-cachetools (5.0.0-1) ...
2026-03-09T14:28:55.966 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-google-auth (1.5.1-3) ...
2026-03-09T14:28:55.984 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-cachetools (5.0.0-1) ...
2026-03-09T14:28:55.993 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:56.019 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-cachetools (5.0.0-1) ...
2026-03-09T14:28:56.029 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:56.037 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:56.065 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:56.073 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:56.088 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-cherrypy3 (18.6.1-4) ...
2026-03-09T14:28:56.110 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T14:28:56.121 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-cherrypy3 (18.6.1-4) ...
2026-03-09T14:28:56.145 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-09T14:28:56.157 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-cherrypy3 (18.6.1-4) ...
2026-03-09T14:28:56.177 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-09T14:28:56.192 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-jaraco.collections (3.4.0-2) ...
2026-03-09T14:28:56.214 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-09T14:28:56.224 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-jaraco.collections (3.4.0-2) ...
2026-03-09T14:28:56.236 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-jaraco.classes (3.2.1-3) ...
2026-03-09T14:28:56.261 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-jaraco.collections (3.4.0-2) ...
2026-03-09T14:28:56.268 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-jaraco.classes (3.2.1-3) ...
2026-03-09T14:28:56.282 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-portend (3.0.0-1) ...
2026-03-09T14:28:56.305 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-jaraco.classes (3.2.1-3) ...
2026-03-09T14:28:56.316 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-portend (3.0.0-1) ...
2026-03-09T14:28:56.328 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-tempora (4.1.2-1) ...
2026-03-09T14:28:56.351 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-portend (3.0.0-1) ...
2026-03-09T14:28:56.363 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-tempora (4.1.2-1) ...
2026-03-09T14:28:56.374 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-jaraco.text (3.6.0-2) ...
2026-03-09T14:28:56.396 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-tempora (4.1.2-1) ...
2026-03-09T14:28:56.411 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-jaraco.text (3.6.0-2) ...
2026-03-09T14:28:56.420 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-jaraco.functools (3.4.0-2) ...
2026-03-09T14:28:56.442 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-jaraco.text (3.6.0-2) ...
2026-03-09T14:28:56.456 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-jaraco.functools (3.4.0-2) ...
2026-03-09T14:28:56.465 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-09T14:28:56.488 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-jaraco.functools (3.4.0-2) ...
2026-03-09T14:28:56.504 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-09T14:28:56.534 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-09T14:28:56.588 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ...
2026-03-09T14:28:56.626 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ...
2026-03-09T14:28:56.646 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-logutils (0.3.3-8) ...
2026-03-09T14:28:56.656 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ...
2026-03-09T14:28:56.684 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-logutils (0.3.3-8) ...
2026-03-09T14:28:56.692 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-09T14:28:56.714 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-logutils (0.3.3-8) ...
2026-03-09T14:28:56.733 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-09T14:28:56.746 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-natsort (8.0.2-1) ...
2026-03-09T14:28:56.762 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-09T14:28:56.782 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-natsort (8.0.2-1) ...
2026-03-09T14:28:56.794 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-paste (3.5.0+dfsg1-1) ...
2026-03-09T14:28:56.809 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-natsort (8.0.2-1) ...
2026-03-09T14:28:56.829 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-paste (3.5.0+dfsg1-1) ...
2026-03-09T14:28:56.849 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-prettytable (2.5.0-2) ...
2026-03-09T14:28:56.855 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-paste (3.5.0+dfsg1-1) ...
2026-03-09T14:28:56.884 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-prettytable (2.5.0-2) ...
2026-03-09T14:28:56.894 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-psutil (5.9.0-1build1) ...
2026-03-09T14:28:56.910 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-prettytable (2.5.0-2) ...
2026-03-09T14:28:56.929 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-psutil (5.9.0-1build1) ...
2026-03-09T14:28:56.942 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-pyinotify (0.9.6-1.3) ...
2026-03-09T14:28:56.953 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-psutil (5.9.0-1build1) ...
2026-03-09T14:28:56.977 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-pyinotify (0.9.6-1.3) ...
2026-03-09T14:28:56.987 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-routes (2.5.1-1ubuntu1) ...
2026-03-09T14:28:57.001 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-pyinotify (0.9.6-1.3) ...
2026-03-09T14:28:57.023 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-routes (2.5.1-1ubuntu1) ...
2026-03-09T14:28:57.033 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-repoze.lru (0.7-2) ...
2026-03-09T14:28:57.045 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-routes (2.5.1-1ubuntu1) ...
2026-03-09T14:28:57.072 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-repoze.lru (0.7-2) ...
2026-03-09T14:28:57.080 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-09T14:28:57.094 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-repoze.lru (0.7-2) ...
2026-03-09T14:28:57.118 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-09T14:28:57.129 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-rsa (4.8-1) ...
2026-03-09T14:28:57.139 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-09T14:28:57.166 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-rsa (4.8-1) ...
2026-03-09T14:28:57.178 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-simplegeneric (0.8.1-3) ...
2026-03-09T14:28:57.185 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-rsa (4.8-1) ...
2026-03-09T14:28:57.219 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-simplegeneric (0.8.1-3) ...
2026-03-09T14:28:57.224 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-simplejson (3.17.6-1build1) ...
2026-03-09T14:28:57.235 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-simplegeneric (0.8.1-3) ...
2026-03-09T14:28:57.265 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-simplejson (3.17.6-1build1) ...
2026-03-09T14:28:57.275 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-singledispatch (3.4.0.3-3) ...
2026-03-09T14:28:57.282 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-simplejson (3.17.6-1build1) ...
2026-03-09T14:28:57.315 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-singledispatch (3.4.0.3-3) ...
2026-03-09T14:28:57.323 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-09T14:28:57.333 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-singledispatch (3.4.0.3-3) ...
2026-03-09T14:28:57.337 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ...
2026-03-09T14:28:57.362 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-09T14:28:57.377 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ...
2026-03-09T14:28:57.382 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-09T14:28:57.385 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-threadpoolctl (3.1.0-1) ...
2026-03-09T14:28:57.395 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ...
2026-03-09T14:28:57.423 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-threadpoolctl (3.1.0-1) ...
2026-03-09T14:28:57.431 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-09T14:28:57.440 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-threadpoolctl (3.1.0-1) ...
2026-03-09T14:28:57.467 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-09T14:28:57.479 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-09T14:28:57.485 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-09T14:28:57.515 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-09T14:28:57.533 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-09T14:28:57.536 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-09T14:28:57.572 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-09T14:28:57.583 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-websocket (1.2.3-1) ...
2026-03-09T14:28:57.589 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-09T14:28:57.620 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-websocket (1.2.3-1) ...
2026-03-09T14:28:57.633 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-09T14:28:57.635 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-websocket (1.2.3-1) ...
2026-03-09T14:28:57.672 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-09T14:28:57.683 INFO:teuthology.orchestra.run.vm05.stdout:Removing python3-zc.lockfile (2.0-1) ...
2026-03-09T14:28:57.687 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-09T14:28:57.721 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-zc.lockfile (2.0-1) ...
2026-03-09T14:28:57.727 INFO:teuthology.orchestra.run.vm05.stdout:Removing qttranslations5-l10n (5.15.3-1) ...
2026-03-09T14:28:57.736 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-zc.lockfile (2.0-1) ...
2026-03-09T14:28:57.748 INFO:teuthology.orchestra.run.vm05.stdout:Removing smartmontools (7.2-1ubuntu0.1) ...
2026-03-09T14:28:57.765 INFO:teuthology.orchestra.run.vm04.stdout:Removing qttranslations5-l10n (5.15.3-1) ...
2026-03-09T14:28:57.781 INFO:teuthology.orchestra.run.vm03.stdout:Removing qttranslations5-l10n (5.15.3-1) ...
2026-03-09T14:28:57.786 INFO:teuthology.orchestra.run.vm04.stdout:Removing smartmontools (7.2-1ubuntu0.1) ...
2026-03-09T14:28:57.801 INFO:teuthology.orchestra.run.vm03.stdout:Removing smartmontools (7.2-1ubuntu0.1) ...
2026-03-09T14:28:58.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:57 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:58.096 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:57 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:58.096 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:57 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:58.096 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:57 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:58.096 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:57 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:58.128 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:57 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:58.128 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:57 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:58.128 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:57 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:58.128 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:57 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:58.138 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:57 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:58.138 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:57 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:58.138 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:57 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:58.138 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:57 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:58.139 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:57 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:58.180 INFO:teuthology.orchestra.run.vm05.stdout:Removing socat (1.7.4.1-3ubuntu4) ...
2026-03-09T14:28:58.193 INFO:teuthology.orchestra.run.vm05.stdout:Removing unzip (6.0-26ubuntu3.2) ...
2026-03-09T14:28:58.209 INFO:teuthology.orchestra.run.vm04.stdout:Removing socat (1.7.4.1-3ubuntu4) ...
2026-03-09T14:28:58.212 INFO:teuthology.orchestra.run.vm05.stdout:Removing xmlstarlet (1.6.1-2.1) ...
2026-03-09T14:28:58.220 INFO:teuthology.orchestra.run.vm03.stdout:Removing socat (1.7.4.1-3ubuntu4) ...
2026-03-09T14:28:58.220 INFO:teuthology.orchestra.run.vm04.stdout:Removing unzip (6.0-26ubuntu3.2) ...
2026-03-09T14:28:58.230 INFO:teuthology.orchestra.run.vm05.stdout:Removing zip (3.0-12build2) ...
2026-03-09T14:28:58.232 INFO:teuthology.orchestra.run.vm03.stdout:Removing unzip (6.0-26ubuntu3.2) ...
2026-03-09T14:28:58.239 INFO:teuthology.orchestra.run.vm04.stdout:Removing xmlstarlet (1.6.1-2.1) ...
2026-03-09T14:28:58.252 INFO:teuthology.orchestra.run.vm03.stdout:Removing xmlstarlet (1.6.1-2.1) ...
2026-03-09T14:28:58.254 INFO:teuthology.orchestra.run.vm05.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T14:28:58.256 INFO:teuthology.orchestra.run.vm04.stdout:Removing zip (3.0-12build2) ...
2026-03-09T14:28:58.268 INFO:teuthology.orchestra.run.vm03.stdout:Removing zip (3.0-12build2) ...
2026-03-09T14:28:58.279 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T14:28:58.297 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T14:28:58.309 INFO:teuthology.orchestra.run.vm05.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-09T14:28:58.316 INFO:teuthology.orchestra.run.vm05.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T14:28:58.338 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-09T14:28:58.345 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T14:28:58.356 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-09T14:28:58.363 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T14:28:58.505 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:28:58 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:58.505 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 09 14:28:58 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:58.505 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 09 14:28:58 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:58.505 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:28:58 vm04 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:58.505 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 09 14:28:58 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:58.505 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 09 14:28:58 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:58.505 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 09 14:28:58 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:58.505 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 09 14:28:58 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:58.505 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:28:58 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:58.551 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:28:58 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:58.551 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 09 14:28:58 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:58.551 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:28:58 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:58.551 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:28:58 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:28:58.551 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:28:58 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:28:59.327 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T14:28:59.330 DEBUG:teuthology.parallel:result is None 2026-03-09T14:28:59.360 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T14:28:59.362 DEBUG:teuthology.parallel:result is None 2026-03-09T14:28:59.390 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T14:28:59.392 DEBUG:teuthology.parallel:result is None 2026-03-09T14:28:59.392 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm03.local 2026-03-09T14:28:59.392 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm04.local 2026-03-09T14:28:59.392 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm05.local 2026-03-09T14:28:59.392 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/apt/sources.list.d/ceph.list 2026-03-09T14:28:59.392 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f /etc/apt/sources.list.d/ceph.list 2026-03-09T14:28:59.392 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f /etc/apt/sources.list.d/ceph.list 2026-03-09T14:28:59.399 DEBUG:teuthology.orchestra.run.vm05:> sudo apt-get update 2026-03-09T14:28:59.410 DEBUG:teuthology.orchestra.run.vm04:> sudo apt-get update 2026-03-09T14:28:59.440 DEBUG:teuthology.orchestra.run.vm03:> sudo apt-get update 2026-03-09T14:28:59.584 INFO:teuthology.orchestra.run.vm04.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-09T14:28:59.585 INFO:teuthology.orchestra.run.vm04.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease 2026-03-09T14:28:59.590 INFO:teuthology.orchestra.run.vm05.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-09T14:28:59.592 INFO:teuthology.orchestra.run.vm04.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease 2026-03-09T14:28:59.593 INFO:teuthology.orchestra.run.vm05.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease 2026-03-09T14:28:59.601 INFO:teuthology.orchestra.run.vm05.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease 2026-03-09T14:28:59.691 INFO:teuthology.orchestra.run.vm04.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease 2026-03-09T14:28:59.945 INFO:teuthology.orchestra.run.vm05.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease 2026-03-09T14:28:59.980 INFO:teuthology.orchestra.run.vm03.stdout:Hit:1 https://security.ubuntu.com/ubuntu jammy-security InRelease 2026-03-09T14:28:59.986 INFO:teuthology.orchestra.run.vm03.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-09T14:29:00.088 INFO:teuthology.orchestra.run.vm03.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-updates InRelease 2026-03-09T14:29:00.191 INFO:teuthology.orchestra.run.vm03.stdout:Hit:4 https://archive.ubuntu.com/ubuntu jammy-backports InRelease 2026-03-09T14:29:00.453 
INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-09T14:29:00.465 DEBUG:teuthology.parallel:result is None 2026-03-09T14:29:00.726 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists... 2026-03-09T14:29:00.738 DEBUG:teuthology.parallel:result is None 2026-03-09T14:29:00.972 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T14:29:00.984 DEBUG:teuthology.parallel:result is None 2026-03-09T14:29:00.984 DEBUG:teuthology.run_tasks:Unwinding manager cephadm 2026-03-09T14:29:00.986 INFO:tasks.cephadm:Teardown begin 2026-03-09T14:29:00.986 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T14:29:00.992 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T14:29:00.999 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T14:29:01.007 INFO:tasks.cephadm:Disabling cephadm mgr module 2026-03-09T14:29:01.007 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 -- ceph mgr module disable cephadm 2026-03-09T14:29:02.134 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/mon.a/config 2026-03-09T14:29:02.457 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-09T14:29:02.454+0000 7fd112ac2640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-09T14:29:02.457 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-09T14:29:02.454+0000 7fd112ac2640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-09T14:29:02.457 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-09T14:29:02.454+0000 7fd112ac2640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-09T14:29:02.457 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-09T14:29:02.454+0000 7fd112ac2640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-09T14:29:02.457 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-09T14:29:02.454+0000 7fd112ac2640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-09T14:29:02.458 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-09T14:29:02.454+0000 7fd112ac2640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-09T14:29:02.458 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-09T14:29:02.454+0000 7fd112ac2640 -1 monclient: keyring not found 2026-03-09T14:29:02.458 INFO:teuthology.orchestra.run.vm03.stderr:[errno 21] error connecting to the cluster 2026-03-09T14:29:02.495 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T14:29:02.495 INFO:tasks.cephadm:Cleaning up testdir ceph.* files... 
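The `ceph mgr module disable cephadm` call above fails with "[errno 21] error connecting to the cluster": on vm03, /etc/ceph/ceph.client.admin.keyring is a directory rather than a file (confirmed below at 14:29:11.147, when `rm -f` refuses to remove it), and the keyring that `cephadm shell -k` apparently exposes as /etc/ceph/ceph.keyring inside the container is therefore unreadable. A defensive variant, as a minimal sketch; `run` here is a stand-in for a teuthology-style remote command runner that returns an exit status, not the real API:

    FSID = '3346de4a-1bc2-11f1-95ae-3796c8433614'
    IMAGE = 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df'
    KEYRING = '/etc/ceph/ceph.client.admin.keyring'

    def disable_cephadm_module(run, fsid=FSID, image=IMAGE, keyring=KEYRING):
        cmd = ['sudo', '/home/ubuntu/cephtest/cephadm', '--image', image,
               'shell', '-c', '/etc/ceph/ceph.conf']
        # Only pass -k when the path is a regular file; here it had become
        # a directory, which is exactly the errno 21 seen above.
        if run(['sudo', 'test', '-f', keyring]) == 0:
            cmd += ['-k', keyring]
        cmd += ['--fsid', fsid, '--',
                'ceph', 'mgr', 'module', 'disable', 'cephadm']
        return run(cmd)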
2026-03-09T14:29:02.495 DEBUG:teuthology.orchestra.run.vm03:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-09T14:29:02.498 DEBUG:teuthology.orchestra.run.vm04:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-09T14:29:02.501 DEBUG:teuthology.orchestra.run.vm05:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-09T14:29:02.504 INFO:tasks.cephadm:Stopping all daemons...
2026-03-09T14:29:02.504 INFO:tasks.cephadm.mon.a:Stopping mon.a...
2026-03-09T14:29:02.504 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@mon.a
2026-03-09T14:29:02.550 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@mon.a.service'
2026-03-09T14:29:02.603 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T14:29:02.603 INFO:tasks.cephadm.mon.a:Stopped mon.a
2026-03-09T14:29:02.603 INFO:tasks.cephadm.mon.c:Stopping mon.b...
2026-03-09T14:29:02.603 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@mon.b
2026-03-09T14:29:02.612 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@mon.b.service'
2026-03-09T14:29:02.664 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T14:29:02.664 INFO:tasks.cephadm.mon.c:Stopped mon.b
2026-03-09T14:29:02.664 INFO:tasks.cephadm.mon.c:Stopping mon.c...
2026-03-09T14:29:02.664 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@mon.c
2026-03-09T14:29:02.673 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@mon.c.service'
2026-03-09T14:29:02.724 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T14:29:02.724 INFO:tasks.cephadm.mon.c:Stopped mon.c
2026-03-09T14:29:02.724 INFO:tasks.cephadm.mgr.x:Stopping mgr.x...
2026-03-09T14:29:02.724 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@mgr.x
2026-03-09T14:29:02.733 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@mgr.x.service'
2026-03-09T14:29:02.783 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T14:29:02.783 INFO:tasks.cephadm.mgr.x:Stopped mgr.x
2026-03-09T14:29:02.783 INFO:tasks.cephadm.osd.0:Stopping osd.0...
2026-03-09T14:29:02.783 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.0
2026-03-09T14:29:02.834 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.0.service'
2026-03-09T14:29:02.887 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T14:29:02.887 INFO:tasks.cephadm.osd.0:Stopped osd.0
2026-03-09T14:29:02.887 INFO:tasks.cephadm.osd.1:Stopping osd.1...
2026-03-09T14:29:02.887 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.1
2026-03-09T14:29:02.938 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.1.service'
2026-03-09T14:29:02.990 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T14:29:02.990 INFO:tasks.cephadm.osd.1:Stopped osd.1
2026-03-09T14:29:02.991 INFO:tasks.cephadm.osd.2:Stopping osd.2...
2026-03-09T14:29:02.991 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.2
2026-03-09T14:29:03.000 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.2.service'
2026-03-09T14:29:03.052 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T14:29:03.052 INFO:tasks.cephadm.osd.2:Stopped osd.2
2026-03-09T14:29:03.052 INFO:tasks.cephadm.osd.3:Stopping osd.3...
2026-03-09T14:29:03.052 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.3
2026-03-09T14:29:03.103 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.3.service'
2026-03-09T14:29:03.156 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T14:29:03.156 INFO:tasks.cephadm.osd.3:Stopped osd.3
2026-03-09T14:29:03.156 INFO:tasks.cephadm.osd.4:Stopping osd.4...
2026-03-09T14:29:03.156 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.4
2026-03-09T14:29:03.207 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.4.service'
2026-03-09T14:29:03.268 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T14:29:03.268 INFO:tasks.cephadm.osd.4:Stopped osd.4
2026-03-09T14:29:03.268 INFO:tasks.cephadm.osd.5:Stopping osd.5...
2026-03-09T14:29:03.268 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.5
2026-03-09T14:29:03.277 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.5.service'
2026-03-09T14:29:03.327 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T14:29:03.327 INFO:tasks.cephadm.osd.5:Stopped osd.5
2026-03-09T14:29:03.327 INFO:tasks.cephadm.osd.6:Stopping osd.6...
2026-03-09T14:29:03.327 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.6
2026-03-09T14:29:03.378 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.6.service'
2026-03-09T14:29:03.431 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T14:29:03.431 INFO:tasks.cephadm.osd.6:Stopped osd.6
2026-03-09T14:29:03.431 INFO:tasks.cephadm.osd.7:Stopping osd.7...
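Every daemon is stopped with the same two-step recipe: `systemctl stop` on the per-fsid unit, then `pkill` of the `journalctl -f` follower that the test framework attached to stream that unit's journal (those followers are the journalctl@ceph.* logger names seen earlier). A minimal sketch of the pattern; `run` is again a hypothetical remote-exec callable, not teuthology's actual API:

    def stop_daemon(run, fsid, name):
        # Stop the systemd unit for one cephadm-managed daemon, then kill
        # the journal follower attached to that unit.
        unit = 'ceph-%s@%s' % (fsid, name)
        run(['sudo', 'systemctl', 'stop', unit])
        # pkill exits 1 when nothing matched; callers tolerate that.
        run(['sudo', 'pkill', '-f',
             'journalctl -f -n 0 -u %s.service' % unit])

    # Illustrative only -- run_for() mapping a daemon to its host is made up:
    # for name in ['mon.a', 'mon.b', 'mon.c', 'mgr.x'] + \
    #             ['osd.%d' % i for i in range(8)]:
    #     stop_daemon(run_for(name), FSID, name)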
2026-03-09T14:29:03.431 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.7
2026-03-09T14:29:03.481 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@osd.7.service'
2026-03-09T14:29:03.531 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T14:29:03.531 INFO:tasks.cephadm.osd.7:Stopped osd.7
2026-03-09T14:29:03.531 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 --force --keep-logs
2026-03-09T14:29:03.612 INFO:teuthology.orchestra.run.vm03.stdout:Deleting cluster with fsid: 3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:29:04.949 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 09 14:29:04 vm03 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
[the same warning is repeated five more times by this follower at 14:29:05.237]
2026-03-09T14:29:06.163 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 --force --keep-logs
2026-03-09T14:29:06.246 INFO:teuthology.orchestra.run.vm04.stdout:Deleting cluster with fsid: 3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:29:08.446 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 --force --keep-logs
2026-03-09T14:29:08.530 INFO:teuthology.orchestra.run.vm05.stdout:Deleting cluster with fsid: 3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:29:09.869 INFO:journalctl@ceph.iscsi.iscsi.b.vm05.stdout:Mar 09 14:29:09 vm05 systemd[1]: /etc/systemd/system/ceph-3346de4a-1bc2-11f1-95ae-3796c8433614@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
[the same warning is repeated six more times by this follower through 14:29:10.255]
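Note that `cephadm rm-cluster` runs here with `--keep-logs`, and again later (14:29:11 to 14:29:13) without it: daemons and data are removed first, /var/log/ceph survives long enough to be compressed and pulled back, and a second pass then deletes the logs too. A sketch of that two-phase shape, under the same hypothetical-runner assumption as above:

    CEPHADM = '/home/ubuntu/cephtest/cephadm'

    def teardown_cluster(runners, fsid, archive_logs):
        # Phase 1: remove daemons and data but leave /var/log/ceph in place.
        for run in runners:
            run(['sudo', CEPHADM, 'rm-cluster', '--fsid', fsid,
                 '--force', '--keep-logs'])
        # Compress and transfer the surviving logs (as in the log below).
        archive_logs()
        # Phase 2: final removal, this time including the logs.
        for run in runners:
            run(['sudo', CEPHADM, 'rm-cluster', '--fsid', fsid, '--force'])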
2026-03-09T14:29:11.140 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-09T14:29:11.147 INFO:teuthology.orchestra.run.vm03.stderr:rm: cannot remove '/etc/ceph/ceph.client.admin.keyring': Is a directory
2026-03-09T14:29:11.147 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T14:29:11.148 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-09T14:29:11.155 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-09T14:29:11.162 INFO:tasks.cephadm:Archiving crash dumps...
2026-03-09T14:29:11.163 DEBUG:teuthology.misc:Transferring archived files from vm03:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/crash to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/502/remote/vm03/crash
2026-03-09T14:29:11.163 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/crash -- .
2026-03-09T14:29:11.195 INFO:teuthology.orchestra.run.vm03.stderr:tar: /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/crash: Cannot open: No such file or directory
2026-03-09T14:29:11.195 INFO:teuthology.orchestra.run.vm03.stderr:tar: Error is not recoverable: exiting now
2026-03-09T14:29:11.196 DEBUG:teuthology.misc:Transferring archived files from vm04:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/crash to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/502/remote/vm04/crash
2026-03-09T14:29:11.196 DEBUG:teuthology.orchestra.run.vm04:> sudo tar c -f - -C /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/crash -- .
2026-03-09T14:29:11.205 INFO:teuthology.orchestra.run.vm04.stderr:tar: /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/crash: Cannot open: No such file or directory
2026-03-09T14:29:11.205 INFO:teuthology.orchestra.run.vm04.stderr:tar: Error is not recoverable: exiting now
2026-03-09T14:29:11.206 DEBUG:teuthology.misc:Transferring archived files from vm05:/var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/crash to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/502/remote/vm05/crash
2026-03-09T14:29:11.206 DEBUG:teuthology.orchestra.run.vm05:> sudo tar c -f - -C /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/crash -- .
2026-03-09T14:29:11.214 INFO:teuthology.orchestra.run.vm05.stderr:tar: /var/lib/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/crash: Cannot open: No such file or directory
2026-03-09T14:29:11.214 INFO:teuthology.orchestra.run.vm05.stderr:tar: Error is not recoverable: exiting now
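The crash-dump archiving above uses the stream-a-tarball idiom: `tar c -f - -C <dir> -- .` on the remote host, unpacked on the archive host; a missing crash directory simply makes the remote tar exit non-zero, which is tolerated as "nothing to archive". A rough local equivalent over ssh, as a sketch rather than teuthology's actual transfer code:

    import pathlib
    import subprocess

    def pull_directory(host, src, dst):
        # Stream `src` from the remote host as an uncompressed tar pipe
        # and unpack it under `dst`.
        pathlib.Path(dst).mkdir(parents=True, exist_ok=True)
        reader = subprocess.Popen(
            ['ssh', host, 'sudo tar c -f - -C %s -- .' % src],
            stdout=subprocess.PIPE)
        subprocess.run(['tar', 'x', '-C', dst],
                       stdin=reader.stdout, check=False)
        # Non-zero here usually just means the source directory was absent.
        return reader.wait() == 0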
2026-03-09T14:29:11.214 INFO:tasks.cephadm:Checking cluster log for badness...
2026-03-09T14:29:11.214 DEBUG:teuthology.orchestra.run.vm03:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/ceph.log | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v MON_DOWN | head -n 1
2026-03-09T14:29:11.245 INFO:teuthology.orchestra.run.vm03.stderr:grep: /var/log/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/ceph.log: No such file or directory
2026-03-09T14:29:11.245 WARNING:tasks.cephadm:Found errors (ERR|WRN|SEC) in cluster log
2026-03-09T14:29:11.245 INFO:tasks.cephadm:Compressing logs...
2026-03-09T14:29:11.246 DEBUG:teuthology.orchestra.run.vm03:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-09T14:29:11.291 DEBUG:teuthology.orchestra.run.vm04:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-09T14:29:11.292 DEBUG:teuthology.orchestra.run.vm05:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-09T14:29:11.296 INFO:teuthology.orchestra.run.vm03.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory
2026-03-09T14:29:11.296 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log
2026-03-09T14:29:11.297 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/ceph-volume.log
2026-03-09T14:29:11.297 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/cephadm.log: 87.6% -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-09T14:29:11.297 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/ceph-volume.log: 75.1% -- replaced with /var/log/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/ceph-volume.log.gz
2026-03-09T14:29:11.298 INFO:teuthology.orchestra.run.vm05.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory
2026-03-09T14:29:11.298 INFO:teuthology.orchestra.run.vm03.stderr:
2026-03-09T14:29:11.298 INFO:teuthology.orchestra.run.vm03.stderr:real 0m0.006s
2026-03-09T14:29:11.298 INFO:teuthology.orchestra.run.vm03.stderr:user 0m0.005s
2026-03-09T14:29:11.298 INFO:teuthology.orchestra.run.vm03.stderr:sys 0m0.004s
2026-03-09T14:29:11.299 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log
2026-03-09T14:29:11.299 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/ceph-volume.log
2026-03-09T14:29:11.299 INFO:teuthology.orchestra.run.vm04.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log
2026-03-09T14:29:11.299 INFO:teuthology.orchestra.run.vm04.stderr:‘/var/log/rbd-target-api’: No such file or directory
2026-03-09T14:29:11.299 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/cephadm.log: 90.4% -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-09T14:29:11.299 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/ceph-volume.log: 79.8% -- replaced with /var/log/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/ceph-volume.log.gz
2026-03-09T14:29:11.299 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/ceph-volume.log
2026-03-09T14:29:11.300 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/cephadm.log: 89.4% -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-09T14:29:11.300 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/ceph-volume.log: 79.8% -- replaced with /var/log/ceph/3346de4a-1bc2-11f1-95ae-3796c8433614/ceph-volume.log.gz
2026-03-09T14:29:11.300 INFO:teuthology.orchestra.run.vm05.stderr:
2026-03-09T14:29:11.300 INFO:teuthology.orchestra.run.vm05.stderr:real 0m0.006s
2026-03-09T14:29:11.300 INFO:teuthology.orchestra.run.vm05.stderr:user 0m0.008s
2026-03-09T14:29:11.300 INFO:teuthology.orchestra.run.vm05.stderr:sys 0m0.001s
2026-03-09T14:29:11.301 INFO:teuthology.orchestra.run.vm04.stderr:
2026-03-09T14:29:11.301 INFO:teuthology.orchestra.run.vm04.stderr:real 0m0.008s
2026-03-09T14:29:11.301 INFO:teuthology.orchestra.run.vm04.stderr:user 0m0.008s
2026-03-09T14:29:11.301 INFO:teuthology.orchestra.run.vm04.stderr:sys 0m0.004s
2026-03-09T14:29:11.301 INFO:tasks.cephadm:Archiving logs...
2026-03-09T14:29:11.301 DEBUG:teuthology.misc:Transferring archived files from vm03:/var/log/ceph to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/502/remote/vm03/log
2026-03-09T14:29:11.301 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-09T14:29:11.348 DEBUG:teuthology.misc:Transferring archived files from vm04:/var/log/ceph to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/502/remote/vm04/log
2026-03-09T14:29:11.348 DEBUG:teuthology.orchestra.run.vm04:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-09T14:29:11.355 DEBUG:teuthology.misc:Transferring archived files from vm05:/var/log/ceph to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/502/remote/vm05/log
2026-03-09T14:29:11.356 DEBUG:teuthology.orchestra.run.vm05:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-09T14:29:11.361 INFO:tasks.cephadm:Removing cluster...
2026-03-09T14:29:11.361 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 --force
2026-03-09T14:29:11.475 INFO:teuthology.orchestra.run.vm03.stdout:Deleting cluster with fsid: 3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:29:12.534 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 --force
2026-03-09T14:29:12.619 INFO:teuthology.orchestra.run.vm04.stdout:Deleting cluster with fsid: 3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:29:13.682 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 3346de4a-1bc2-11f1-95ae-3796c8433614 --force
2026-03-09T14:29:13.772 INFO:teuthology.orchestra.run.vm05.stdout:Deleting cluster with fsid: 3346de4a-1bc2-11f1-95ae-3796c8433614
2026-03-09T14:29:14.828 INFO:tasks.cephadm:Removing cephadm ...
2026-03-09T14:29:14.828 DEBUG:teuthology.orchestra.run.vm03:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-09T14:29:14.831 DEBUG:teuthology.orchestra.run.vm04:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-09T14:29:14.834 DEBUG:teuthology.orchestra.run.vm05:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-09T14:29:14.843 INFO:tasks.cephadm:Teardown complete
2026-03-09T14:29:14.843 DEBUG:teuthology.run_tasks:Unwinding manager clock
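One oddity in the badness check above is worth noting: grep itself fails (the cluster log never existed on vm03 at that path), yet the task still reports "Found errors (ERR|WRN|SEC) in cluster log". A plausible reading is that the check keys off the pipeline's exit status, which is `head`'s (0), not egrep's, so a missing file looks the same as a match. A sketch of how such a filter command could be assembled from the job's log-ignorelist (the helper name is illustrative):

    import shlex

    def badness_cmd(fsid, ignorelist):
        log = '/var/log/ceph/%s/ceph.log' % fsid
        cmd = "sudo egrep '\\[ERR\\]|\\[WRN\\]|\\[SEC\\]' " + log
        for pat in ignorelist:
            cmd += ' | egrep -v %s' % shlex.quote(pat)
        # NB: the exit status of the full pipeline is head's, not egrep's,
        # so a missing log file still yields rc=0 ("errors found").
        return cmd + ' | head -n 1'

    print(badness_cmd('3346de4a-1bc2-11f1-95ae-3796c8433614',
                      ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)',
                       'MON_DOWN']))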
2026-03-09T14:29:14.845 INFO:teuthology.task.clock:Checking final clock skew...
2026-03-09T14:29:14.845 DEBUG:teuthology.orchestra.run.vm03:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T14:29:14.874 DEBUG:teuthology.orchestra.run.vm04:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T14:29:14.875 DEBUG:teuthology.orchestra.run.vm05:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T14:29:15.599 INFO:teuthology.orchestra.run.vm04.stdout: remote refid st t when poll reach delay offset jitter
2026-03-09T14:29:15.599 INFO:teuthology.orchestra.run.vm04.stdout:==============================================================================
2026-03-09T14:29:15.599 INFO:teuthology.orchestra.run.vm04.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:29:15.599 INFO:teuthology.orchestra.run.vm04.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:29:15.599 INFO:teuthology.orchestra.run.vm04.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:29:15.599 INFO:teuthology.orchestra.run.vm04.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:29:15.599 INFO:teuthology.orchestra.run.vm04.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:29:15.599 INFO:teuthology.orchestra.run.vm04.stdout:+185.252.140.125 216.239.35.4 2 u 40 64 377 25.043 +0.212 1.255
2026-03-09T14:29:15.599 INFO:teuthology.orchestra.run.vm04.stdout:-s7.vonderste.in 131.188.3.222 2 u 44 64 377 28.728 -1.693 2.059
2026-03-09T14:29:15.599 INFO:teuthology.orchestra.run.vm04.stdout:#ernie.gerger-ne 213.172.96.14 3 u 42 64 377 31.826 +0.298 1.290
2026-03-09T14:29:15.599 INFO:teuthology.orchestra.run.vm04.stdout:#hermes.linxx.pa 185.131.196.23 2 u 46 64 377 28.290 -4.894 1.554
2026-03-09T14:29:15.599 INFO:teuthology.orchestra.run.vm04.stdout:*mx10.edv-hueske 189.97.54.122 2 u 43 64 377 28.699 -0.580 1.269
2026-03-09T14:29:15.599 INFO:teuthology.orchestra.run.vm04.stdout:#185.13.148.71 79.133.44.146 2 u 40 64 377 31.944 -1.163 1.303
2026-03-09T14:29:15.599 INFO:teuthology.orchestra.run.vm04.stdout:+139-162-156-95. 82.35.162.146 2 u 40 64 377 22.449 -7.160 2.481
2026-03-09T14:29:15.599 INFO:teuthology.orchestra.run.vm04.stdout:#ns8.starka.st 79.133.44.139 2 u 34 64 377 22.688 -2.438 2.396
2026-03-09T14:29:15.599 INFO:teuthology.orchestra.run.vm04.stdout:+ns1.2053.net 131.188.3.222 2 u 46 64 377 24.955 +0.050 0.912
2026-03-09T14:29:15.599 INFO:teuthology.orchestra.run.vm04.stdout:+router02.i-tk.d 192.168.125.22 2 u 43 64 377 44.856 -1.916 2.098
2026-03-09T14:29:15.599 INFO:teuthology.orchestra.run.vm04.stdout:+139-162-187-236 80.192.165.246 2 u 49 64 377 22.543 -5.288 0.688
2026-03-09T14:29:15.599 INFO:teuthology.orchestra.run.vm04.stdout:#139-162-152-20. 82.35.162.146 2 u 40 64 377 22.595 -5.107 1.072
2026-03-09T14:29:15.599 INFO:teuthology.orchestra.run.vm04.stdout:+static.241.200. 168.239.11.197 2 u 34 64 377 25.017 +1.455 1.971
2026-03-09T14:29:15.599 INFO:teuthology.orchestra.run.vm04.stdout:+185.125.190.58 145.238.80.80 2 u 5 64 377 35.292 -0.536 1.691
2026-03-09T14:29:15.599 INFO:teuthology.orchestra.run.vm04.stdout:#mail.morbitzer. 205.46.178.169 2 u 38 64 377 28.235 -5.440 2.233
2026-03-09T14:29:15.599 INFO:teuthology.orchestra.run.vm04.stdout:#ntp1.as213151.n 222.217.153.8 2 u 38 64 377 28.504 -10.838 15.869
2026-03-09T14:29:15.599 INFO:teuthology.orchestra.run.vm04.stdout:-185.125.190.57 194.121.207.249 2 u 62 64 377 36.549 -0.320 1.086
2026-03-09T14:29:15.613 INFO:teuthology.orchestra.run.vm05.stdout: remote refid st t when poll reach delay offset jitter
2026-03-09T14:29:15.613 INFO:teuthology.orchestra.run.vm05.stdout:==============================================================================
2026-03-09T14:29:15.613 INFO:teuthology.orchestra.run.vm05.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:29:15.614 INFO:teuthology.orchestra.run.vm05.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:29:15.614 INFO:teuthology.orchestra.run.vm05.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:29:15.614 INFO:teuthology.orchestra.run.vm05.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:29:15.614 INFO:teuthology.orchestra.run.vm05.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:29:15.614 INFO:teuthology.orchestra.run.vm05.stdout:+ns8.starka.st 79.133.44.139 2 u 39 64 377 22.689 -1.924 1.339
2026-03-09T14:29:15.614 INFO:teuthology.orchestra.run.vm05.stdout:#hermes.linxx.pa 185.131.196.23 2 u 38 64 377 28.205 -5.905 1.677
2026-03-09T14:29:15.614 INFO:teuthology.orchestra.run.vm05.stdout:+ernie.gerger-ne 213.172.96.14 3 u 30 64 377 31.846 -0.620 1.262
2026-03-09T14:29:15.614 INFO:teuthology.orchestra.run.vm05.stdout:+mx10.edv-hueske 189.97.54.122 2 u 37 64 377 26.782 +0.030 1.619
2026-03-09T14:29:15.614 INFO:teuthology.orchestra.run.vm05.stdout:+s7.vonderste.in 131.188.3.222 2 u 39 64 377 28.690 -3.399 1.855
2026-03-09T14:29:15.614 INFO:teuthology.orchestra.run.vm05.stdout:#ntp1.as213151.n 222.217.153.8 2 u 45 64 377 28.838 -23.150 27.630
2026-03-09T14:29:15.614 INFO:teuthology.orchestra.run.vm05.stdout:#139-162-152-20. 82.35.162.146 2 u 42 64 377 22.821 -7.276 2.175
2026-03-09T14:29:15.614 INFO:teuthology.orchestra.run.vm05.stdout:*static.241.200. 168.239.11.197 2 u 36 64 377 25.010 -1.948 1.861
2026-03-09T14:29:15.614 INFO:teuthology.orchestra.run.vm05.stdout:+185.252.140.125 216.239.35.4 2 u 37 64 377 25.100 -2.394 1.899
2026-03-09T14:29:15.614 INFO:teuthology.orchestra.run.vm05.stdout:#139-162-156-95. 82.35.162.146 2 u 39 64 377 22.582 -6.932 1.767
2026-03-09T14:29:15.614 INFO:teuthology.orchestra.run.vm05.stdout:+ns1.2053.net 131.188.3.222 2 u 42 64 377 24.912 -1.834 1.836
2026-03-09T14:29:15.614 INFO:teuthology.orchestra.run.vm05.stdout:+alphyn.canonica 132.163.96.1 2 u 1 64 317 97.343 -2.279 3.057
2026-03-09T14:29:15.614 INFO:teuthology.orchestra.run.vm05.stdout:+185.125.190.58 145.238.80.80 2 u 63 64 377 32.043 -2.331 2.268
2026-03-09T14:29:15.614 INFO:teuthology.orchestra.run.vm05.stdout:+185.13.148.71 79.133.44.146 2 u 30 64 377 31.946 -2.208 1.896
2026-03-09T14:29:15.614 INFO:teuthology.orchestra.run.vm05.stdout:+185.125.190.56 79.243.60.50 2 u 58 64 377 35.135 -3.912 2.242
2026-03-09T14:29:15.614 INFO:teuthology.orchestra.run.vm05.stdout:+185.125.190.57 194.121.207.249 2 u 1 64 377 36.396 -2.505 1.461
2026-03-09T14:29:15.614 INFO:teuthology.orchestra.run.vm03.stdout: remote refid st t when poll reach delay offset jitter
2026-03-09T14:29:15.614 INFO:teuthology.orchestra.run.vm03.stdout:==============================================================================
2026-03-09T14:29:15.614 INFO:teuthology.orchestra.run.vm03.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:29:15.614 INFO:teuthology.orchestra.run.vm03.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:29:15.614 INFO:teuthology.orchestra.run.vm03.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:29:15.614 INFO:teuthology.orchestra.run.vm03.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:29:15.614 INFO:teuthology.orchestra.run.vm03.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:29:15.614 INFO:teuthology.orchestra.run.vm03.stdout:#hermes.linxx.pa 185.131.196.23 2 u 43 64 377 28.279 -5.377 1.378
2026-03-09T14:29:15.614 INFO:teuthology.orchestra.run.vm03.stdout:+s7.vonderste.in 131.188.3.222 2 u 46 64 377 28.738 -0.055 3.464
2026-03-09T14:29:15.614 INFO:teuthology.orchestra.run.vm03.stdout:+ernie.gerger-ne 213.172.96.14 3 u 40 64 377 31.862 -1.035 1.014
2026-03-09T14:29:15.615 INFO:teuthology.orchestra.run.vm03.stdout:#139-162-156-95. 82.43.52.28 2 u 46 64 377 22.784 -4.540 1.625
2026-03-09T14:29:15.615 INFO:teuthology.orchestra.run.vm03.stdout:+185.252.140.125 216.239.35.4 2 u 50 64 377 25.072 -2.362 1.877
2026-03-09T14:29:15.615 INFO:teuthology.orchestra.run.vm03.stdout:#185.13.148.71 79.133.44.146 2 u 39 64 377 31.882 -2.141 1.581
2026-03-09T14:29:15.615 INFO:teuthology.orchestra.run.vm03.stdout:+ns8.starka.st 79.133.44.139 2 u 43 64 377 22.645 -2.037 0.977
2026-03-09T14:29:15.615 INFO:teuthology.orchestra.run.vm03.stdout:+ns1.2053.net 131.188.3.222 2 u 34 64 377 24.919 +0.815 1.736
2026-03-09T14:29:15.615 INFO:teuthology.orchestra.run.vm03.stdout:+mx10.edv-hueske 189.97.54.122 2 u 38 64 377 26.903 -1.219 1.102
2026-03-09T14:29:15.615 INFO:teuthology.orchestra.run.vm03.stdout:#139-162-152-20. 82.35.162.146 2 u 45 64 377 22.542 -4.472 1.620
2026-03-09T14:29:15.615 INFO:teuthology.orchestra.run.vm03.stdout:#ntp1.as213151.n 222.217.153.8 2 u 41 64 377 28.728 -11.341 15.571
2026-03-09T14:29:15.615 INFO:teuthology.orchestra.run.vm03.stdout:#alphyn.canonica 132.163.96.1 2 u - 64 367 97.362 -1.920 1.285
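The three `ntpq -p` dumps above are the final clock-skew check; the worst raw peer offset is about -23 ms (vm05's ntp1.as213151.n, already demoted with a '#' tally mark), so the hosts are in good agreement. A small parser for output in this shape, as a sketch:

    def max_offset_ms(ntpq_output):
        # Largest absolute peer offset (ms) from `ntpq -p` output.
        # Skips the header, the ====== rule and unresolved .POOL. stubs;
        # tally marks (* + # - x ~) in column one are stripped first.
        worst = 0.0
        for line in ntpq_output.splitlines():
            fields = line.lstrip('*+#-x~ ').split()
            if len(fields) < 10 or fields[0] == 'remote' or fields[1] == '.POOL.':
                continue
            try:
                worst = max(worst, abs(float(fields[8])))
            except ValueError:
                continue  # separator rule or malformed line
        return worst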
2026-03-09T14:29:15.615 INFO:teuthology.orchestra.run.vm03.stdout:+static.241.200. 168.239.11.197 2 u 37 64 377 24.994 +0.184 1.260
2026-03-09T14:29:15.615 INFO:teuthology.orchestra.run.vm03.stdout:+185.125.190.57 194.121.207.249 2 u 58 64 377 35.234 -2.020 1.232
2026-03-09T14:29:15.615 INFO:teuthology.orchestra.run.vm03.stdout:*185.125.190.56 79.243.60.50 2 u 60 64 377 33.260 -0.578 1.239
2026-03-09T14:29:15.615 INFO:teuthology.orchestra.run.vm03.stdout:+185.125.190.58 145.238.80.80 2 u 5 64 377 32.074 -0.818 1.188
2026-03-09T14:29:15.615 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-09T14:29:15.617 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-09T14:29:15.617 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-09T14:29:15.619 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-09T14:29:15.621 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-09T14:29:15.624 INFO:teuthology.task.internal:Duration was 1077.918522 seconds
2026-03-09T14:29:15.624 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-09T14:29:15.626 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-03-09T14:29:15.626 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-09T14:29:15.628 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-09T14:29:15.629 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-09T14:29:15.657 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-03-09T14:29:15.657 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm03.local
2026-03-09T14:29:15.657 DEBUG:teuthology.orchestra.run.vm03:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-09T14:29:15.704 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm04.local
2026-03-09T14:29:15.704 DEBUG:teuthology.orchestra.run.vm04:> [same kern.log filter chain as on vm03]
2026-03-09T14:29:15.717 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm05.local
2026-03-09T14:29:15.717 DEBUG:teuthology.orchestra.run.vm05:> [same kern.log filter chain as on vm03]
2026-03-09T14:29:15.727 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-09T14:29:15.727 DEBUG:teuthology.orchestra.run.vm03:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T14:29:15.746 DEBUG:teuthology.orchestra.run.vm04:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T14:29:15.759 DEBUG:teuthology.orchestra.run.vm05:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T14:29:15.812 INFO:teuthology.task.internal.syslog:Compressing syslogs...
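The kern.log check above is one positive `grep -E` for \bBUG\b|\bINFO\b|\bDEADLOCK\b followed by a long tail of negative greps for known-benign matches. A sketch of assembling such a chain from a pattern list (list truncated; the helper name is made up, and the real chain also mixes plain `grep -v` with `grep -E -v`):

    KERN_LOG = '/home/ubuntu/cephtest/archive/syslog/kern.log'
    EXCLUDES = [
        'task .* blocked for more than .* seconds',
        'lockdep is turned off',
        'trying to register non-static key',
        'DEBUG: fsize',
        'CRON',
        'BUG: bad unlock balance detected',
        # ... plus the remaining -v patterns from the command above
    ]

    def kern_check_cmd(kern_log=KERN_LOG, excludes=EXCLUDES):
        # Positive match first, then peel away the benign patterns;
        # head -n 1 keeps only the first surviving hit.
        cmd = ("grep -E --binary-files=text "
               "'\\bBUG\\b|\\bINFO\\b|\\bDEADLOCK\\b' " + kern_log)
        for pat in excludes:
            cmd += " | grep -v '" + pat + "'"
        return cmd + ' | head -n 1'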
2026-03-09T14:29:15.812 DEBUG:teuthology.orchestra.run.vm03:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-09T14:29:15.813 DEBUG:teuthology.orchestra.run.vm04:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-09T14:29:15.819 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T14:29:15.819 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T14:29:15.819 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0%gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T14:29:15.819 INFO:teuthology.orchestra.run.vm03.stderr: -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-09T14:29:15.820 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-09T14:29:15.829 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 91.1% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-09T14:29:15.843 DEBUG:teuthology.orchestra.run.vm05:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-09T14:29:15.849 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T14:29:15.849 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T14:29:15.850 INFO:teuthology.orchestra.run.vm04.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T14:29:15.850 INFO:teuthology.orchestra.run.vm04.stderr: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-09T14:29:15.850 INFO:teuthology.orchestra.run.vm04.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-09T14:29:15.858 INFO:teuthology.orchestra.run.vm04.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 90.9% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-09T14:29:15.861 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T14:29:15.861 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T14:29:15.861 INFO:teuthology.orchestra.run.vm05.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-09T14:29:15.861 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T14:29:15.861 INFO:teuthology.orchestra.run.vm05.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: /home/ubuntu/cephtest/archive/syslog/journalctl.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-09T14:29:15.869 INFO:teuthology.orchestra.run.vm05.stderr: 91.2% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-09T14:29:15.870 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-09T14:29:15.881 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-09T14:29:15.881 DEBUG:teuthology.orchestra.run.vm03:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-09T14:29:15.888 DEBUG:teuthology.orchestra.run.vm04:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-09T14:29:15.909 DEBUG:teuthology.orchestra.run.vm05:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-09T14:29:15.921 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-09T14:29:15.926 DEBUG:teuthology.orchestra.run.vm03:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-09T14:29:15.930 DEBUG:teuthology.orchestra.run.vm04:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-09T14:29:15.936 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern = core
2026-03-09T14:29:15.951 DEBUG:teuthology.orchestra.run.vm05:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-09T14:29:15.957 INFO:teuthology.orchestra.run.vm04.stdout:kernel.core_pattern = core
2026-03-09T14:29:15.968 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern = core
2026-03-09T14:29:15.975 DEBUG:teuthology.orchestra.run.vm03:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-09T14:29:15.988 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T14:29:15.988 DEBUG:teuthology.orchestra.run.vm04:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-09T14:29:16.009 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T14:29:16.009 DEBUG:teuthology.orchestra.run.vm05:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-09T14:29:16.020 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T14:29:16.021 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-09T14:29:16.026 INFO:teuthology.task.internal:Transferring archived files...
2026-03-09T14:29:16.026 DEBUG:teuthology.misc:Transferring archived files from vm03:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/502/remote/vm03
2026-03-09T14:29:16.027 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-09T14:29:16.038 DEBUG:teuthology.misc:Transferring archived files from vm04:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/502/remote/vm04
2026-03-09T14:29:16.038 DEBUG:teuthology.orchestra.run.vm04:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
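The coredump unwind above restores kernel.core_pattern, deletes any collected cores that `file` attributes to systemd-sysusers, and removes the directory only if it is then empty, so the `test -e` result of 1 on all three hosts is the success path: no real cores survived. As a sketch, with `run` the same hypothetical exit-status-returning runner as earlier:

    COREDUMP_DIR = '/home/ubuntu/cephtest/archive/coredump'

    def had_coredumps(run, path=COREDUMP_DIR):
        # After pruning false positives and `rmdir --ignore-fail-on-non-empty`,
        # the directory only survives if genuine cores were written.
        return run(['test', '-e', path]) == 0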
2026-03-09T14:29:16.058 DEBUG:teuthology.misc:Transferring archived files from vm05:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/502/remote/vm05
2026-03-09T14:29:16.058 DEBUG:teuthology.orchestra.run.vm05:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-09T14:29:16.071 INFO:teuthology.task.internal:Removing archive directory...
2026-03-09T14:29:16.071 DEBUG:teuthology.orchestra.run.vm03:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-09T14:29:16.082 DEBUG:teuthology.orchestra.run.vm04:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-09T14:29:16.099 DEBUG:teuthology.orchestra.run.vm05:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-09T14:29:16.118 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-09T14:29:16.130 INFO:teuthology.task.internal:Not uploading archives.
2026-03-09T14:29:16.130 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-09T14:29:16.142 INFO:teuthology.task.internal:Tidying up after the test...
2026-03-09T14:29:16.142 DEBUG:teuthology.orchestra.run.vm03:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-09T14:29:16.143 DEBUG:teuthology.orchestra.run.vm04:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-09T14:29:16.144 DEBUG:teuthology.orchestra.run.vm05:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-09T14:29:16.145 INFO:teuthology.orchestra.run.vm03.stdout: 258077 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 9 14:29 /home/ubuntu/cephtest
2026-03-09T14:29:16.146 INFO:teuthology.orchestra.run.vm04.stdout: 258078 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 9 14:29 /home/ubuntu/cephtest
2026-03-09T14:29:16.161 INFO:teuthology.orchestra.run.vm05.stdout: 258079 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 9 14:29 /home/ubuntu/cephtest
2026-03-09T14:29:16.161 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-09T14:29:16.204 INFO:teuthology.run:Summary data:
description: orch/cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} supported-container-hosts$/{ubuntu_22.04} workloads/cephadm_iscsi}
duration: 1077.9185218811035
failure_reason: 'Command failed on vm03 with status 1: ''CEPH_REF=master CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/virtualenv/bin/cram -v -- /home/ubuntu/cephtest/archive/cram.client.0/*.t'''
flavor: default
owner: kyr
sentry_event: null
status: fail
success: false
2026-03-09T14:29:16.204 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-09T14:29:16.256 INFO:teuthology.run:FAIL
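The summary pins the failure on the earlier cram run for client.0 (status 1 from the .t tests under /home/ubuntu/cephtest/archive/cram.client.0), not on anything in this teardown, which completed. For triaging a batch of such jobs, a small reader for the job's summary.yaml; the archive path layout follows this run, and PyYAML is assumed to be available:

    import yaml

    def triage(job_dir):
        # summary.yaml carries the same fields dumped in the log above.
        with open(job_dir + '/summary.yaml') as f:
            s = yaml.safe_load(f)
        verdict = 'pass' if s.get('success') else 'FAIL'
        print('%s after %.0fs: %s'
              % (verdict, s.get('duration', 0),
                 s.get('failure_reason', 'n/a')))

    triage('/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/502')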