2026-03-06T23:34:40.221 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-06T23:34:40.226 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-06T23:34:40.247 INFO:teuthology.run:Config:
archive_path: /archive/irq0-2026-03-06_20:21:59-orch:cephadm:smoke-roleless-cobaltcore-storage-v19.2.3-fasttrack-5-none-default-vps/412
branch: cobaltcore-storage-v19.2.3-fasttrack-5
description: orch:cephadm:smoke-roleless/{0-distro/ubuntu_22.04 1-start 2-services/jaeger 3-final}
email: null
first_in_suite: false
flavor: default
job_id: '412'
ktype: distro
last_in_suite: false
machine_type: vps
name: irq0-2026-03-06_20:21:59-orch:cephadm:smoke-roleless-cobaltcore-storage-v19.2.3-fasttrack-5-none-default-vps
no_nested_subset: false
openstack:
- volumes:
    count: 4
    size: 10
os_type: ubuntu
os_version: '22.04'
overrides:
  admin_socket:
    branch: cobaltcore-storage-v19.2.3-fasttrack-5
  ansible.cephlab:
    branch: main
    repo: https://github.com/kshtsk/ceph-cm-ansible.git
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: Europe/Berlin
  ceph:
    conf:
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
        osd shutdown pgref assert: true
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - CEPHADM_DAEMON_PLACE_FAIL
    - CEPHADM_FAILED_DAEMON
    log-only-match:
    - CEPHADM_
    sha1: 340d3c24fc6ae7529322dc7ccee6c6cb2589da0a
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  cephadm:
    cephadm_binary_url: https://download.ceph.com/rpm-19.2.3/el9/noarch/cephadm
    containers:
      image: harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5
  install:
    ceph:
      flavor: default
      sha1: 340d3c24fc6ae7529322dc7ccee6c6cb2589da0a
    extra_system_packages:
      deb:
      - python3-xmltodict
      - s3cmd
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - s3cmd
    repos:
    - name: ceph-source
      priority: 1
      url: https://s3.clyso.com/ces-packages/components/ceph/rpm-19.2.3-39-g340d3c24fc6/el9.clyso/SRPMS
    - name: ceph-noarch
      priority: 1
      url: https://s3.clyso.com/ces-packages/components/ceph/rpm-19.2.3-39-g340d3c24fc6/el9.clyso/noarch
    - name: ceph
      priority: 1
      url: https://s3.clyso.com/ces-packages/components/ceph/rpm-19.2.3-39-g340d3c24fc6/el9.clyso/x86_64
  workunit:
    branch: tt-19.2.3-fasttrack-5-no-nvme-loop
    sha1: b952d7263a165ada4530724b87fab57a8f3f547b
owner: irq0
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - host.a
  - client.0
- - host.b
  - client.1
seed: 9421
sha1: 340d3c24fc6ae7529322dc7ccee6c6cb2589da0a
sleep_before_teardown: 0
suite: orch:cephadm:smoke-roleless
suite_branch: tt-19.2.3-fasttrack-5-no-nvme-loop
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_b952d7263a165ada4530724b87fab57a8f3f547b/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: b952d7263a165ada4530724b87fab57a8f3f547b
targets:
  vm02.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJfhoNzXdlnUIlNZCAiSjuHRys0fsGnIGOIXbzJMUSiiFrrnPKPx0BnO+NsGO6kjOIBnrv+MiErWlnfPqxo/SoE=
  vm07.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLYipwNTDHUqXUui0CDCjglK6vK1IKAfsO6HbcyUI5uAxBrvnJJa4lQ8SSEucp1Ld4/9Y1QxlkpgdoijjOMAD2U=
tasks:
- cephadm:
    roleless: true
- cephadm.shell:
    host.a:
    - ceph orch status
    - ceph orch ps
    - ceph orch ls
    - ceph orch host ls
    - ceph orch device ls
- cephadm.shell:
    host.a:
    - ceph orch apply jaeger
- cephadm.wait_for_service:
    service: elasticsearch
- cephadm.wait_for_service:
    service: jaeger-collector
- cephadm.wait_for_service:
    service: jaeger-query
- cephadm.wait_for_service:
    service: jaeger-agent
- cephadm.shell:
    host.a:
    - stat -c '%u %g' /var/log/ceph | grep '167 167'
    - ceph orch status
    - ceph orch ps
    - ceph orch ls
    - ceph orch host ls
    - ceph orch device ls
    - ceph orch ls | grep '^osd.all-available-devices '
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-06_20:21:59
tube: vps
user: irq0
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.43333
2026-03-06T23:34:40.247 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_b952d7263a165ada4530724b87fab57a8f3f547b/qa; will attempt to use it
2026-03-06T23:34:40.247 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_b952d7263a165ada4530724b87fab57a8f3f547b/qa/tasks
2026-03-06T23:34:40.247 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-06T23:34:40.248 INFO:teuthology.task.internal:Saving configuration
2026-03-06T23:34:40.253 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-06T23:34:40.253 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-06T23:34:40.260 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm02.local', 'description': '/archive/irq0-2026-03-06_20:21:59-orch:cephadm:smoke-roleless-cobaltcore-storage-v19.2.3-fasttrack-5-none-default-vps/412', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-06 22:33:31.522005', 'locked_by': 'irq0', 'mac_address': '52:55:00:00:00:02', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJfhoNzXdlnUIlNZCAiSjuHRys0fsGnIGOIXbzJMUSiiFrrnPKPx0BnO+NsGO6kjOIBnrv+MiErWlnfPqxo/SoE='}
2026-03-06T23:34:40.265 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm07.local', 'description': '/archive/irq0-2026-03-06_20:21:59-orch:cephadm:smoke-roleless-cobaltcore-storage-v19.2.3-fasttrack-5-none-default-vps/412', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-06 22:33:31.522435', 'locked_by': 'irq0', 'mac_address': '52:55:00:00:00:07', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLYipwNTDHUqXUui0CDCjglK6vK1IKAfsO6HbcyUI5uAxBrvnJJa4lQ8SSEucp1Ld4/9Y1QxlkpgdoijjOMAD2U='}
2026-03-06T23:34:40.265 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-06T23:34:40.266 INFO:teuthology.task.internal:roles: ubuntu@vm02.local - ['host.a', 'client.0']
2026-03-06T23:34:40.266 INFO:teuthology.task.internal:roles: ubuntu@vm07.local - ['host.b', 'client.1']
2026-03-06T23:34:40.266 INFO:teuthology.run_tasks:Running task console_log...
2026-03-06T23:34:40.272 DEBUG:teuthology.task.console_log:vm02 does not support IPMI; excluding
2026-03-06T23:34:40.277 DEBUG:teuthology.task.console_log:vm07 does not support IPMI; excluding
2026-03-06T23:34:40.277 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7ffa39faff40>, signals=[15])
2026-03-06T23:34:40.277 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-06T23:34:40.277 INFO:teuthology.task.internal:Opening connections...
2026-03-06T23:34:40.277 DEBUG:teuthology.task.internal:connecting to ubuntu@vm02.local
2026-03-06T23:34:40.278 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm02.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-06T23:34:40.338 DEBUG:teuthology.task.internal:connecting to ubuntu@vm07.local
2026-03-06T23:34:40.339 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm07.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-06T23:34:40.399 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-06T23:34:40.401 DEBUG:teuthology.orchestra.run.vm02:> uname -m
2026-03-06T23:34:40.409 INFO:teuthology.orchestra.run.vm02.stdout:x86_64
2026-03-06T23:34:40.410 DEBUG:teuthology.orchestra.run.vm02:> cat /etc/os-release
2026-03-06T23:34:40.454 INFO:teuthology.orchestra.run.vm02.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-06T23:34:40.455 INFO:teuthology.orchestra.run.vm02.stdout:NAME="Ubuntu"
2026-03-06T23:34:40.455 INFO:teuthology.orchestra.run.vm02.stdout:VERSION_ID="22.04"
2026-03-06T23:34:40.455 INFO:teuthology.orchestra.run.vm02.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-06T23:34:40.455 INFO:teuthology.orchestra.run.vm02.stdout:VERSION_CODENAME=jammy
2026-03-06T23:34:40.455 INFO:teuthology.orchestra.run.vm02.stdout:ID=ubuntu
2026-03-06T23:34:40.455 INFO:teuthology.orchestra.run.vm02.stdout:ID_LIKE=debian
2026-03-06T23:34:40.455 INFO:teuthology.orchestra.run.vm02.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-06T23:34:40.455 INFO:teuthology.orchestra.run.vm02.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-06T23:34:40.455 INFO:teuthology.orchestra.run.vm02.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-06T23:34:40.455 INFO:teuthology.orchestra.run.vm02.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-06T23:34:40.455 INFO:teuthology.orchestra.run.vm02.stdout:UBUNTU_CODENAME=jammy
2026-03-06T23:34:40.455 INFO:teuthology.lock.ops:Updating vm02.local on lock server
2026-03-06T23:34:40.459 DEBUG:teuthology.orchestra.run.vm07:> uname -m
2026-03-06T23:34:40.467 INFO:teuthology.orchestra.run.vm07.stdout:x86_64
2026-03-06T23:34:40.467 DEBUG:teuthology.orchestra.run.vm07:> cat /etc/os-release
2026-03-06T23:34:40.516 INFO:teuthology.orchestra.run.vm07.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-06T23:34:40.516 INFO:teuthology.orchestra.run.vm07.stdout:NAME="Ubuntu"
2026-03-06T23:34:40.516 INFO:teuthology.orchestra.run.vm07.stdout:VERSION_ID="22.04"
2026-03-06T23:34:40.516 INFO:teuthology.orchestra.run.vm07.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-06T23:34:40.517 INFO:teuthology.orchestra.run.vm07.stdout:VERSION_CODENAME=jammy
2026-03-06T23:34:40.517 INFO:teuthology.orchestra.run.vm07.stdout:ID=ubuntu
2026-03-06T23:34:40.517 INFO:teuthology.orchestra.run.vm07.stdout:ID_LIKE=debian
2026-03-06T23:34:40.517 INFO:teuthology.orchestra.run.vm07.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-06T23:34:40.517 INFO:teuthology.orchestra.run.vm07.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-06T23:34:40.517 INFO:teuthology.orchestra.run.vm07.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-06T23:34:40.517 INFO:teuthology.orchestra.run.vm07.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-06T23:34:40.517 INFO:teuthology.orchestra.run.vm07.stdout:UBUNTU_CODENAME=jammy
2026-03-06T23:34:40.517 INFO:teuthology.lock.ops:Updating vm07.local on lock server
2026-03-06T23:34:40.521 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-06T23:34:40.523 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-06T23:34:40.524 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-06T23:34:40.524 DEBUG:teuthology.orchestra.run.vm02:> test '!' -e /home/ubuntu/cephtest
2026-03-06T23:34:40.525 DEBUG:teuthology.orchestra.run.vm07:> test '!' -e /home/ubuntu/cephtest
2026-03-06T23:34:40.561 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-06T23:34:40.562 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-06T23:34:40.562 DEBUG:teuthology.orchestra.run.vm02:> test -z $(ls -A /var/lib/ceph)
2026-03-06T23:34:40.568 DEBUG:teuthology.orchestra.run.vm07:> test -z $(ls -A /var/lib/ceph)
2026-03-06T23:34:40.570 INFO:teuthology.orchestra.run.vm02.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-06T23:34:40.605 INFO:teuthology.orchestra.run.vm07.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-06T23:34:40.605 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-06T23:34:40.617 DEBUG:teuthology.orchestra.run.vm02:> test -e /ceph-qa-ready
2026-03-06T23:34:40.620 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-06T23:34:40.850 DEBUG:teuthology.orchestra.run.vm07:> test -e /ceph-qa-ready
2026-03-06T23:34:40.852 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-06T23:34:41.082 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-06T23:34:41.084 INFO:teuthology.task.internal:Creating test directory...
2026-03-06T23:34:41.084 DEBUG:teuthology.orchestra.run.vm02:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-06T23:34:41.085 DEBUG:teuthology.orchestra.run.vm07:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-06T23:34:41.088 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-06T23:34:41.089 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-06T23:34:41.090 INFO:teuthology.task.internal:Creating archive directory...
2026-03-06T23:34:41.091 DEBUG:teuthology.orchestra.run.vm02:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-06T23:34:41.132 DEBUG:teuthology.orchestra.run.vm07:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-06T23:34:41.137 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-06T23:34:41.138 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-06T23:34:41.138 DEBUG:teuthology.orchestra.run.vm02:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-06T23:34:41.178 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-06T23:34:41.178 DEBUG:teuthology.orchestra.run.vm07:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-06T23:34:41.181 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-06T23:34:41.181 DEBUG:teuthology.orchestra.run.vm02:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-06T23:34:41.220 DEBUG:teuthology.orchestra.run.vm07:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-06T23:34:41.227 INFO:teuthology.orchestra.run.vm02.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-06T23:34:41.229 INFO:teuthology.orchestra.run.vm07.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-06T23:34:41.232 INFO:teuthology.orchestra.run.vm02.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-06T23:34:41.234 INFO:teuthology.orchestra.run.vm07.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-06T23:34:41.235 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-06T23:34:41.236 INFO:teuthology.task.internal:Configuring sudo...
2026-03-06T23:34:41.236 DEBUG:teuthology.orchestra.run.vm02:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-06T23:34:41.276 DEBUG:teuthology.orchestra.run.vm07:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-06T23:34:41.284 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-06T23:34:41.286 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-06T23:34:41.286 DEBUG:teuthology.orchestra.run.vm02:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-06T23:34:41.324 DEBUG:teuthology.orchestra.run.vm07:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-06T23:34:41.328 DEBUG:teuthology.orchestra.run.vm02:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-06T23:34:41.370 DEBUG:teuthology.orchestra.run.vm02:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-06T23:34:41.414 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-06T23:34:41.414 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-06T23:34:41.463 DEBUG:teuthology.orchestra.run.vm07:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-06T23:34:41.466 DEBUG:teuthology.orchestra.run.vm07:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-06T23:34:41.512 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-06T23:34:41.512 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-06T23:34:41.561 DEBUG:teuthology.orchestra.run.vm02:> sudo service rsyslog restart
2026-03-06T23:34:41.562 DEBUG:teuthology.orchestra.run.vm07:> sudo service rsyslog restart
2026-03-06T23:34:41.616 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-06T23:34:41.618 INFO:teuthology.task.internal:Starting timer...
2026-03-06T23:34:41.618 INFO:teuthology.run_tasks:Running task pcp...
2026-03-06T23:34:41.620 INFO:teuthology.run_tasks:Running task selinux...
2026-03-06T23:34:41.622 INFO:teuthology.task.selinux:Excluding vm02: VMs are not yet supported
2026-03-06T23:34:41.622 INFO:teuthology.task.selinux:Excluding vm07: VMs are not yet supported
2026-03-06T23:34:41.622 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-06T23:34:41.622 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-06T23:34:41.622 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-06T23:34:41.622 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-06T23:34:41.624 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'repo': 'https://github.com/kshtsk/ceph-cm-ansible.git', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'Europe/Berlin'}}
2026-03-06T23:34:41.625 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/kshtsk/ceph-cm-ansible.git
2026-03-06T23:34:41.626 INFO:teuthology.repo_utils:Fetching github.com_kshtsk_ceph-cm-ansible_main from origin
2026-03-06T23:34:42.181 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_kshtsk_ceph-cm-ansible_main to origin/main
2026-03-06T23:34:42.187 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-06T23:34:42.188 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "Europe/Berlin"}' -i /tmp/teuth_ansible_inventorytugyhjgj --limit vm02.local,vm07.local /home/teuthos/src/github.com_kshtsk_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-06T23:36:42.613 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm02.local'), Remote(name='ubuntu@vm07.local')]
2026-03-06T23:36:42.613 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm02.local'
2026-03-06T23:36:42.614 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm02.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-06T23:36:42.674 DEBUG:teuthology.orchestra.run.vm02:> true
2026-03-06T23:36:42.892 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm02.local'
2026-03-06T23:36:42.892 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm07.local'
2026-03-06T23:36:42.892 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm07.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-06T23:36:42.953 DEBUG:teuthology.orchestra.run.vm07:> true
2026-03-06T23:36:43.172 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm07.local'
2026-03-06T23:36:43.172 INFO:teuthology.run_tasks:Running task clock...
2026-03-06T23:36:43.175 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-06T23:36:43.175 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-06T23:36:43.175 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-06T23:36:43.176 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-06T23:36:43.176 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-06T23:36:43.191 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:43 ntpd[15638]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-06T23:36:43.191 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:43 ntpd[15638]: Command line: ntpd -gq
2026-03-06T23:36:43.191 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:43 ntpd[15638]: ----------------------------------------------------
2026-03-06T23:36:43.191 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:43 ntpd[15638]: ntp-4 is maintained by Network Time Foundation,
2026-03-06T23:36:43.191 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:43 ntpd[15638]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-06T23:36:43.191 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:43 ntpd[15638]: corporation. Support and training for ntp-4 are
2026-03-06T23:36:43.192 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:43 ntpd[15638]: available at https://www.nwtime.org/support
2026-03-06T23:36:43.192 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:43 ntpd[15638]: ----------------------------------------------------
2026-03-06T23:36:43.192 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:43 ntpd[15638]: proto: precision = 0.029 usec (-25)
2026-03-06T23:36:43.192 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:43 ntpd[15638]: basedate set to 2022-02-04
2026-03-06T23:36:43.192 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:43 ntpd[15638]: gps base set to 2022-02-06 (week 2196)
2026-03-06T23:36:43.192 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:43 ntpd[15638]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-06T23:36:43.192 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:43 ntpd[15638]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-06T23:36:43.192 INFO:teuthology.orchestra.run.vm02.stderr: 6 Mar 23:36:43 ntpd[15638]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 69 days ago
2026-03-06T23:36:43.193 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:43 ntpd[15638]: Listen and drop on 0 v6wildcard [::]:123
2026-03-06T23:36:43.193 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:43 ntpd[15638]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-06T23:36:43.193 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:43 ntpd[15638]: Listen normally on 2 lo 127.0.0.1:123
2026-03-06T23:36:43.193 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:43 ntpd[15638]: Listen normally on 3 ens3 192.168.123.102:123
2026-03-06T23:36:43.193 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:43 ntpd[15638]: Listen normally on 4 lo [::1]:123
2026-03-06T23:36:43.193 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:43 ntpd[15638]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:2%2]:123
2026-03-06T23:36:43.193 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:43 ntpd[15638]: Listening on routing socket on fd #22 for interface updates
2026-03-06T23:36:43.228 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:43 ntpd[15655]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-06T23:36:43.228 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:43 ntpd[15655]: Command line: ntpd -gq
2026-03-06T23:36:43.228 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:43 ntpd[15655]: ----------------------------------------------------
2026-03-06T23:36:43.228 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:43 ntpd[15655]: ntp-4 is maintained by Network Time Foundation,
2026-03-06T23:36:43.228 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:43 ntpd[15655]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-06T23:36:43.228 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:43 ntpd[15655]: corporation. Support and training for ntp-4 are
2026-03-06T23:36:43.228 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:43 ntpd[15655]: available at https://www.nwtime.org/support
2026-03-06T23:36:43.228 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:43 ntpd[15655]: ----------------------------------------------------
2026-03-06T23:36:43.229 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:43 ntpd[15655]: proto: precision = 0.029 usec (-25)
2026-03-06T23:36:43.229 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:43 ntpd[15655]: basedate set to 2022-02-04
2026-03-06T23:36:43.229 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:43 ntpd[15655]: gps base set to 2022-02-06 (week 2196)
2026-03-06T23:36:43.229 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:43 ntpd[15655]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-06T23:36:43.229 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:43 ntpd[15655]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-06T23:36:43.229 INFO:teuthology.orchestra.run.vm07.stderr: 6 Mar 23:36:43 ntpd[15655]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 69 days ago
2026-03-06T23:36:43.230 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:43 ntpd[15655]: Listen and drop on 0 v6wildcard [::]:123
2026-03-06T23:36:43.230 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:43 ntpd[15655]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-06T23:36:43.230 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:43 ntpd[15655]: Listen normally on 2 lo 127.0.0.1:123
2026-03-06T23:36:43.230 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:43 ntpd[15655]: Listen normally on 3 ens3 192.168.123.107:123
2026-03-06T23:36:43.230 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:43 ntpd[15655]: Listen normally on 4 lo [::1]:123
2026-03-06T23:36:43.230 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:43 ntpd[15655]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:7%2]:123
2026-03-06T23:36:43.230 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:43 ntpd[15655]: Listening on routing socket on fd #22 for interface updates
2026-03-06T23:36:44.193 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:44 ntpd[15638]: Soliciting pool server 90.187.112.137
2026-03-06T23:36:44.229 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:44 ntpd[15655]: Soliciting pool server 90.187.112.137
2026-03-06T23:36:45.191 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:45 ntpd[15638]: Soliciting pool server 5.75.181.179
2026-03-06T23:36:45.192 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:45 ntpd[15638]: Soliciting pool server 116.202.118.202
2026-03-06T23:36:45.228 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:45 ntpd[15655]: Soliciting pool server 5.75.181.179
2026-03-06T23:36:45.228 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:45 ntpd[15655]: Soliciting pool server 116.202.118.202
2026-03-06T23:36:46.191 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:46 ntpd[15638]: Soliciting pool server 176.9.8.206
2026-03-06T23:36:46.191 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:46 ntpd[15638]: Soliciting pool server 46.41.21.10
2026-03-06T23:36:46.191 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:46 ntpd[15638]: Soliciting pool server 144.91.126.59
2026-03-06T23:36:46.227 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:46 ntpd[15655]: Soliciting pool server 176.9.8.206
2026-03-06T23:36:46.227 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:46 ntpd[15655]: Soliciting pool server 46.41.21.10
2026-03-06T23:36:46.228 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:46 ntpd[15655]: Soliciting pool server 144.91.126.59
2026-03-06T23:36:47.190 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:47 ntpd[15638]: Soliciting pool server 185.13.148.71
2026-03-06T23:36:47.191 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:47 ntpd[15638]: Soliciting pool server 129.250.35.251
2026-03-06T23:36:47.191 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:47 ntpd[15638]: Soliciting pool server 45.9.61.155
2026-03-06T23:36:47.191 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:47 ntpd[15638]: Soliciting pool server 217.154.182.60
2026-03-06T23:36:47.227 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:47 ntpd[15655]: Soliciting pool server 185.13.148.71
2026-03-06T23:36:47.227 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:47 ntpd[15655]: Soliciting pool server 129.250.35.251
2026-03-06T23:36:47.227 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:47 ntpd[15655]: Soliciting pool server 45.9.61.155
2026-03-06T23:36:47.227 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:47 ntpd[15655]: Soliciting pool server 217.154.182.60
2026-03-06T23:36:48.190 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:48 ntpd[15638]: Soliciting pool server 85.215.189.120
2026-03-06T23:36:48.190 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:48 ntpd[15638]: Soliciting pool server 217.91.44.17
2026-03-06T23:36:48.190 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:48 ntpd[15638]: Soliciting pool server 85.121.52.237
2026-03-06T23:36:48.190 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:48 ntpd[15638]: Soliciting pool server 91.189.91.157
2026-03-06T23:36:48.226 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:48 ntpd[15655]: Soliciting pool server 85.215.189.120
2026-03-06T23:36:48.226 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:48 ntpd[15655]: Soliciting pool server 217.91.44.17
2026-03-06T23:36:48.226 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:48 ntpd[15655]: Soliciting pool server 85.121.52.237
2026-03-06T23:36:48.227 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:48 ntpd[15655]: Soliciting pool server 91.189.91.157
2026-03-06T23:36:49.190 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:49 ntpd[15638]: Soliciting pool server 185.125.190.58
2026-03-06T23:36:49.190 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:49 ntpd[15638]: Soliciting pool server 148.251.5.46
2026-03-06T23:36:49.190 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:49 ntpd[15638]: Soliciting pool server 178.63.52.50
2026-03-06T23:36:49.226 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:49 ntpd[15655]: Soliciting pool server 185.125.190.58
2026-03-06T23:36:49.226 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:49 ntpd[15655]: Soliciting pool server 148.251.5.46
2026-03-06T23:36:49.226 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:49 ntpd[15655]: Soliciting pool server 178.63.52.50
2026-03-06T23:36:50.225 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:50 ntpd[15655]: Soliciting pool server 185.125.190.57
2026-03-06T23:36:50.226 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:50 ntpd[15655]: Soliciting pool server 94.16.122.152
2026-03-06T23:36:50.226 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:50 ntpd[15655]: Soliciting pool server 2a01:4f8:251:1bce::2
2026-03-06T23:36:52.213 INFO:teuthology.orchestra.run.vm02.stdout: 6 Mar 23:36:52 ntpd[15638]: ntpd: time slew -0.000083 s
2026-03-06T23:36:52.213 INFO:teuthology.orchestra.run.vm02.stdout:ntpd: time slew -0.000083s
2026-03-06T23:36:52.231 INFO:teuthology.orchestra.run.vm02.stdout:     remote           refid      st t when poll reach   delay   offset  jitter
2026-03-06T23:36:52.232 INFO:teuthology.orchestra.run.vm02.stdout:==============================================================================
2026-03-06T23:36:52.232 INFO:teuthology.orchestra.run.vm02.stdout: 0.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-06T23:36:52.232 INFO:teuthology.orchestra.run.vm02.stdout: 1.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-06T23:36:52.232 INFO:teuthology.orchestra.run.vm02.stdout: 2.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-06T23:36:52.232 INFO:teuthology.orchestra.run.vm02.stdout: 3.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-06T23:36:52.232 INFO:teuthology.orchestra.run.vm02.stdout: ntp.ubuntu.com  .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-06T23:36:52.248 INFO:teuthology.orchestra.run.vm07.stdout: 6 Mar 23:36:52 ntpd[15655]: ntpd: time slew +0.012463 s
2026-03-06T23:36:52.248 INFO:teuthology.orchestra.run.vm07.stdout:ntpd: time slew +0.012463s
2026-03-06T23:36:52.267 INFO:teuthology.orchestra.run.vm07.stdout:     remote           refid      st t when poll reach   delay   offset  jitter
2026-03-06T23:36:52.267 INFO:teuthology.orchestra.run.vm07.stdout:==============================================================================
2026-03-06T23:36:52.267 INFO:teuthology.orchestra.run.vm07.stdout: 0.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-06T23:36:52.267 INFO:teuthology.orchestra.run.vm07.stdout: 1.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-06T23:36:52.267 INFO:teuthology.orchestra.run.vm07.stdout: 2.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-06T23:36:52.267 INFO:teuthology.orchestra.run.vm07.stdout: 3.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-06T23:36:52.268 INFO:teuthology.orchestra.run.vm07.stdout: ntp.ubuntu.com  .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-06T23:36:52.268 INFO:teuthology.run_tasks:Running task cephadm...
2026-03-06T23:36:52.312 INFO:tasks.cephadm:Config: {'roleless': True, 'conf': {'mgr': {'debug mgr': 20, 'debug ms': 1}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000, 'osd shutdown pgref assert': True}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'CEPHADM_DAEMON_PLACE_FAIL', 'CEPHADM_FAILED_DAEMON'], 'log-only-match': ['CEPHADM_'], 'sha1': '340d3c24fc6ae7529322dc7ccee6c6cb2589da0a', 'cephadm_binary_url': 'https://download.ceph.com/rpm-19.2.3/el9/noarch/cephadm', 'containers': {'image': 'harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5'}}
2026-03-06T23:36:52.312 INFO:tasks.cephadm:Provided image contains tag or digest, using it as is
2026-03-06T23:36:52.312 INFO:tasks.cephadm:Cluster image is harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5
2026-03-06T23:36:52.312 INFO:tasks.cephadm:Cluster fsid is f8b8c16a-19ac-11f1-87e7-9b7402b99c44
2026-03-06T23:36:52.312 INFO:tasks.cephadm:Choosing monitor IPs and ports...
2026-03-06T23:36:52.312 INFO:tasks.cephadm:No mon roles; fabricating mons
2026-03-06T23:36:52.312 INFO:tasks.cephadm:Monitor IPs: {'mon.vm02': '192.168.123.102', 'mon.vm07': '192.168.123.107'}
2026-03-06T23:36:52.312 INFO:tasks.cephadm:Normalizing hostnames...
2026-03-06T23:36:52.312 DEBUG:teuthology.orchestra.run.vm02:> sudo hostname $(hostname -s)
2026-03-06T23:36:52.320 DEBUG:teuthology.orchestra.run.vm07:> sudo hostname $(hostname -s)
2026-03-06T23:36:52.326 INFO:tasks.cephadm:Downloading cephadm from url: https://download.ceph.com/rpm-19.2.3/el9/noarch/cephadm
2026-03-06T23:36:52.326 DEBUG:teuthology.orchestra.run.vm02:> curl --silent -L https://download.ceph.com/rpm-19.2.3/el9/noarch/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-06T23:36:53.427 INFO:teuthology.orchestra.run.vm02.stdout:-rw-rw-r-- 1 ubuntu ubuntu 787672 Mar 6 23:36 /home/ubuntu/cephtest/cephadm
2026-03-06T23:36:53.427 DEBUG:teuthology.orchestra.run.vm07:> curl --silent -L https://download.ceph.com/rpm-19.2.3/el9/noarch/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-06T23:36:54.455 INFO:teuthology.orchestra.run.vm07.stdout:-rw-rw-r-- 1 ubuntu ubuntu 787672 Mar 6 23:36 /home/ubuntu/cephtest/cephadm
2026-03-06T23:36:54.455 DEBUG:teuthology.orchestra.run.vm02:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-06T23:36:54.459 DEBUG:teuthology.orchestra.run.vm07:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-06T23:36:54.465 INFO:tasks.cephadm:Pulling image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 on all hosts...
2026-03-06T23:36:54.465 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 pull
2026-03-06T23:36:54.502 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 pull
2026-03-06T23:36:54.740 INFO:teuthology.orchestra.run.vm02.stderr:Pulling container image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5...
2026-03-06T23:36:54.747 INFO:teuthology.orchestra.run.vm07.stderr:Pulling container image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5...
2026-03-06T23:37:16.268 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-06T23:37:16.268 INFO:teuthology.orchestra.run.vm02.stdout:    "ceph_version": "ceph version 19.2.3-39-g340d3c24fc6 (340d3c24fc6ae7529322dc7ccee6c6cb2589da0a) squid (stable)",
2026-03-06T23:37:16.268 INFO:teuthology.orchestra.run.vm02.stdout:    "image_id": "8bccc98d839aa18345ec1336292d0452ca331737e49f12524f635044dcabcfe1",
2026-03-06T23:37:16.268 INFO:teuthology.orchestra.run.vm02.stdout:    "repo_digests": [
2026-03-06T23:37:16.268 INFO:teuthology.orchestra.run.vm02.stdout:        "harbor.clyso.com/custom-ceph/ceph/ceph@sha256:ffa52c72fad7bdd2657408de9cf8d87fc2c72f716d1a00277ba13f7c12b404e0"
2026-03-06T23:37:16.268 INFO:teuthology.orchestra.run.vm02.stdout:    ]
2026-03-06T23:37:16.268 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-06T23:37:16.284 INFO:teuthology.orchestra.run.vm07.stdout:{
2026-03-06T23:37:16.284 INFO:teuthology.orchestra.run.vm07.stdout:    "ceph_version": "ceph version 19.2.3-39-g340d3c24fc6 (340d3c24fc6ae7529322dc7ccee6c6cb2589da0a) squid (stable)",
2026-03-06T23:37:16.284 INFO:teuthology.orchestra.run.vm07.stdout:    "image_id": "8bccc98d839aa18345ec1336292d0452ca331737e49f12524f635044dcabcfe1",
2026-03-06T23:37:16.284 INFO:teuthology.orchestra.run.vm07.stdout:    "repo_digests": [
2026-03-06T23:37:16.284 INFO:teuthology.orchestra.run.vm07.stdout:        "harbor.clyso.com/custom-ceph/ceph/ceph@sha256:ffa52c72fad7bdd2657408de9cf8d87fc2c72f716d1a00277ba13f7c12b404e0"
2026-03-06T23:37:16.284 INFO:teuthology.orchestra.run.vm07.stdout:    ]
2026-03-06T23:37:16.284 INFO:teuthology.orchestra.run.vm07.stdout:}
2026-03-06T23:37:16.304 DEBUG:teuthology.orchestra.run.vm02:> sudo mkdir -p /etc/ceph
2026-03-06T23:37:16.311 DEBUG:teuthology.orchestra.run.vm07:> sudo mkdir -p /etc/ceph
2026-03-06T23:37:16.319 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod 777 /etc/ceph
2026-03-06T23:37:16.360 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod 777 /etc/ceph
2026-03-06T23:37:16.367 INFO:tasks.cephadm:Writing seed config...
2026-03-06T23:37:16.367 INFO:tasks.cephadm: override: [mgr] debug mgr = 20
2026-03-06T23:37:16.368 INFO:tasks.cephadm: override: [mgr] debug ms = 1
2026-03-06T23:37:16.368 INFO:tasks.cephadm: override: [mon] debug mon = 20
2026-03-06T23:37:16.368 INFO:tasks.cephadm: override: [mon] debug ms = 1
2026-03-06T23:37:16.368 INFO:tasks.cephadm: override: [mon] debug paxos = 20
2026-03-06T23:37:16.368 INFO:tasks.cephadm: override: [osd] debug ms = 1
2026-03-06T23:37:16.368 INFO:tasks.cephadm: override: [osd] debug osd = 20
2026-03-06T23:37:16.368 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000
2026-03-06T23:37:16.368 INFO:tasks.cephadm: override: [osd] osd shutdown pgref assert = True
2026-03-06T23:37:16.368 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-06T23:37:16.368 DEBUG:teuthology.orchestra.run.vm02:> dd of=/home/ubuntu/cephtest/seed.ceph.conf
2026-03-06T23:37:16.404 DEBUG:tasks.cephadm:Final config:
[global]
# make logging friendly to teuthology log
log_to_file = true
log_to_stderr = false
log to journald = false
mon cluster log to file = true
mon cluster log file level = debug
mon clock drift allowed = 1.000

# replicate across OSDs, not hosts
osd crush chooseleaf type = 0
#osd pool default size = 2
osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd

# enable some debugging
auth debug = true
ms die on old message = true
ms die on bug = true
debug asserts on shutdown = true

# adjust warnings
mon max pg per osd = 10000        # >= luminous
mon pg warn max object skew = 0
mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false

# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off

# tests delete pools
mon allow pool delete = true
fsid = f8b8c16a-19ac-11f1-87e7-9b7402b99c44

[osd]
osd scrub load threshold = 5.0
osd scrub max interval = 600
osd mclock profile = high_recovery_ops
osd recover clone overlap = true
osd recovery max chunk = 1048576
osd deep scrub update digest min age = 30
osd map max advance = 10
osd memory target autotune = true

# debugging
osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true
osd debug pg log writeout = true
osd debug verify cached snaps = true
osd debug verify missing on start = true
osd debug misdirected ops = true
osd op queue = debug_random
osd op queue cut off = debug_random
osd shutdown pgref assert = True
bdev debug aio = true
osd sloppy crc = true
debug ms = 1
debug osd = 20
osd mclock iops capacity threshold hdd = 49000

[mgr]
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false
debug mgr = 20
debug ms = 1

[mon]
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10
# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660     # 11m
auth service ticket ttl = 240 # 4m
# don't complain about global id reclaim
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false
debug mon = 20
debug ms = 1
debug paxos = 20

[client.rgw]
rgw cache enabled = true
rgw enable ops log = true
rgw enable usage log = true
2026-03-06T23:37:16.404 DEBUG:teuthology.orchestra.run.vm02:mon.vm02> sudo journalctl -f -n 0 -u ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@mon.vm02.service
2026-03-06T23:37:16.446 INFO:tasks.cephadm:Bootstrapping...
2026-03-06T23:37:16.446 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 -v bootstrap --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 192.168.123.102 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring
2026-03-06T23:37:16.722 INFO:teuthology.orchestra.run.vm02.stdout:--------------------------------------------------------------------------------
2026-03-06T23:37:16.722 INFO:teuthology.orchestra.run.vm02.stdout:cephadm ['--image', 'harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5', '-v', 'bootstrap', '--fsid', 'f8b8c16a-19ac-11f1-87e7-9b7402b99c44', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-ip', '192.168.123.102', '--skip-admin-label']
2026-03-06T23:37:16.722 INFO:teuthology.orchestra.run.vm02.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts.
2026-03-06T23:37:16.723 INFO:teuthology.orchestra.run.vm02.stdout:Verifying podman|docker is present...
2026-03-06T23:37:16.723 INFO:teuthology.orchestra.run.vm02.stdout:Verifying lvm2 is present...
2026-03-06T23:37:16.723 INFO:teuthology.orchestra.run.vm02.stdout:Verifying time synchronization is in place...
2026-03-06T23:37:16.725 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-06T23:37:16.725 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-06T23:37:16.728 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-06T23:37:16.728 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout inactive
2026-03-06T23:37:16.730 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service
2026-03-06T23:37:16.730 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory
2026-03-06T23:37:16.733 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service
2026-03-06T23:37:16.733 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout inactive
2026-03-06T23:37:16.735 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service
2026-03-06T23:37:16.735 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout masked
2026-03-06T23:37:16.737 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service
2026-03-06T23:37:16.737 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout inactive
2026-03-06T23:37:16.739 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service
2026-03-06T23:37:16.740 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory
2026-03-06T23:37:16.742 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service
2026-03-06T23:37:16.742 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout inactive
2026-03-06T23:37:16.745 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout enabled
2026-03-06T23:37:16.748 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout active
2026-03-06T23:37:16.748 INFO:teuthology.orchestra.run.vm02.stdout:Unit ntp.service is enabled and running
2026-03-06T23:37:16.748 INFO:teuthology.orchestra.run.vm02.stdout:Repeating the final host check...
2026-03-06T23:37:16.748 INFO:teuthology.orchestra.run.vm02.stdout:docker (/usr/bin/docker) is present
2026-03-06T23:37:16.748 INFO:teuthology.orchestra.run.vm02.stdout:systemctl is present
2026-03-06T23:37:16.748 INFO:teuthology.orchestra.run.vm02.stdout:lvcreate is present
2026-03-06T23:37:16.751 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-06T23:37:16.751 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-06T23:37:16.753 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-06T23:37:16.753 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout inactive
2026-03-06T23:37:16.755 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service
2026-03-06T23:37:16.756 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory
2026-03-06T23:37:16.758 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service
2026-03-06T23:37:16.758 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout inactive
2026-03-06T23:37:16.760 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service
2026-03-06T23:37:16.760 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout masked
2026-03-06T23:37:16.762 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service
2026-03-06T23:37:16.762 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout inactive
2026-03-06T23:37:16.764 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service
2026-03-06T23:37:16.764 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory
2026-03-06T23:37:16.766 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service
2026-03-06T23:37:16.767 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout inactive
2026-03-06T23:37:16.769 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout enabled
2026-03-06T23:37:16.771 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout active
2026-03-06T23:37:16.771 INFO:teuthology.orchestra.run.vm02.stdout:Unit ntp.service is enabled and running
2026-03-06T23:37:16.772 INFO:teuthology.orchestra.run.vm02.stdout:Host looks OK
2026-03-06T23:37:16.772 INFO:teuthology.orchestra.run.vm02.stdout:Cluster fsid: f8b8c16a-19ac-11f1-87e7-9b7402b99c44
2026-03-06T23:37:16.772 INFO:teuthology.orchestra.run.vm02.stdout:Acquiring lock 139846019385328 on /run/cephadm/f8b8c16a-19ac-11f1-87e7-9b7402b99c44.lock
2026-03-06T23:37:16.772 INFO:teuthology.orchestra.run.vm02.stdout:Lock 139846019385328 acquired on /run/cephadm/f8b8c16a-19ac-11f1-87e7-9b7402b99c44.lock
2026-03-06T23:37:16.772 INFO:teuthology.orchestra.run.vm02.stdout:Verifying IP 192.168.123.102 port 3300 ...
2026-03-06T23:37:16.772 INFO:teuthology.orchestra.run.vm02.stdout:Verifying IP 192.168.123.102 port 6789 ...
2026-03-06T23:37:16.772 INFO:teuthology.orchestra.run.vm02.stdout:Base mon IP(s) is [192.168.123.102:3300, 192.168.123.102:6789], mon addrv is [v2:192.168.123.102:3300,v1:192.168.123.102:6789]
2026-03-06T23:37:16.774 INFO:teuthology.orchestra.run.vm02.stdout:/usr/sbin/ip: stdout default via 192.168.123.1 dev ens3 proto dhcp src 192.168.123.102 metric 100
2026-03-06T23:37:16.774 INFO:teuthology.orchestra.run.vm02.stdout:/usr/sbin/ip: stdout 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
2026-03-06T23:37:16.774 INFO:teuthology.orchestra.run.vm02.stdout:/usr/sbin/ip: stdout 192.168.123.0/24 dev ens3 proto kernel scope link src 192.168.123.102 metric 100
2026-03-06T23:37:16.774 INFO:teuthology.orchestra.run.vm02.stdout:/usr/sbin/ip: stdout 192.168.123.1 dev ens3 proto dhcp scope link src 192.168.123.102 metric 100
2026-03-06T23:37:16.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium
2026-03-06T23:37:16.775 INFO:teuthology.orchestra.run.vm02.stdout:/usr/sbin/ip: stdout fe80::/64 dev ens3 proto kernel metric 256 pref medium
2026-03-06T23:37:16.776 INFO:teuthology.orchestra.run.vm02.stdout:/usr/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000
2026-03-06T23:37:16.776 INFO:teuthology.orchestra.run.vm02.stdout:/usr/sbin/ip: stdout inet6 ::1/128 scope host
2026-03-06T23:37:16.776 INFO:teuthology.orchestra.run.vm02.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-06T23:37:16.776 INFO:teuthology.orchestra.run.vm02.stdout:/usr/sbin/ip: stdout 2: ens3: mtu 1500 state UP qlen 1000
2026-03-06T23:37:16.776 INFO:teuthology.orchestra.run.vm02.stdout:/usr/sbin/ip: stdout inet6 fe80::5055:ff:fe00:2/64 scope link
2026-03-06T23:37:16.776 INFO:teuthology.orchestra.run.vm02.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-06T23:37:16.777 INFO:teuthology.orchestra.run.vm02.stdout:Mon IP `192.168.123.102` is in CIDR network `192.168.123.0/24`
2026-03-06T23:37:16.777 INFO:teuthology.orchestra.run.vm02.stdout:Mon IP `192.168.123.102` is in CIDR network `192.168.123.0/24`
2026-03-06T23:37:16.777 INFO:teuthology.orchestra.run.vm02.stdout:Mon IP `192.168.123.102` is in CIDR network `192.168.123.1/32`
2026-03-06T23:37:16.777 INFO:teuthology.orchestra.run.vm02.stdout:Mon IP `192.168.123.102` is in CIDR network `192.168.123.1/32`
2026-03-06T23:37:16.777 INFO:teuthology.orchestra.run.vm02.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24', '192.168.123.1/32', '192.168.123.1/32']
2026-03-06T23:37:16.777 INFO:teuthology.orchestra.run.vm02.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
2026-03-06T23:37:16.778 INFO:teuthology.orchestra.run.vm02.stdout:Pulling container image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5...
2026-03-06T23:37:17.321 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/docker: stdout cobaltcore-storage-v19.2.3-fasttrack-5: Pulling from custom-ceph/ceph/ceph
2026-03-06T23:37:17.321 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/docker: stdout Digest: sha256:ffa52c72fad7bdd2657408de9cf8d87fc2c72f716d1a00277ba13f7c12b404e0
2026-03-06T23:37:17.321 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/docker: stdout Status: Image is up to date for harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5
2026-03-06T23:37:17.321 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/docker: stdout harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5
2026-03-06T23:37:17.584 INFO:teuthology.orchestra.run.vm02.stdout:ceph: stdout ceph version 19.2.3-39-g340d3c24fc6 (340d3c24fc6ae7529322dc7ccee6c6cb2589da0a) squid (stable)
2026-03-06T23:37:17.584 INFO:teuthology.orchestra.run.vm02.stdout:Ceph version: ceph version 19.2.3-39-g340d3c24fc6 (340d3c24fc6ae7529322dc7ccee6c6cb2589da0a) squid (stable)
2026-03-06T23:37:17.584 INFO:teuthology.orchestra.run.vm02.stdout:Extracting ceph user uid/gid from container image...
2026-03-06T23:37:17.675 INFO:teuthology.orchestra.run.vm02.stdout:stat: stdout 167 167
2026-03-06T23:37:17.675 INFO:teuthology.orchestra.run.vm02.stdout:Creating initial keys...
2026-03-06T23:37:17.767 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-authtool: stdout AQAdV6tpsmDvKxAABJeHpXNklwsRvMWw8JgWlQ==
2026-03-06T23:37:17.870 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-authtool: stdout AQAdV6tp4un8MRAA5dDe/7EZ3UC0h5ALcAw0iA==
2026-03-06T23:37:17.967 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-authtool: stdout AQAdV6tpPXHLNxAAbB5oIYrK2/EuJsccBrWNow==
2026-03-06T23:37:17.967 INFO:teuthology.orchestra.run.vm02.stdout:Creating initial monmap...
2026-03-06T23:37:18.080 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-06T23:37:18.080 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy
2026-03-06T23:37:18.080 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to f8b8c16a-19ac-11f1-87e7-9b7402b99c44
2026-03-06T23:37:18.080 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-06T23:37:18.080 INFO:teuthology.orchestra.run.vm02.stdout:monmaptool for vm02 [v2:192.168.123.102:3300,v1:192.168.123.102:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-06T23:37:18.080 INFO:teuthology.orchestra.run.vm02.stdout:setting min_mon_release = quincy
2026-03-06T23:37:18.080 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/monmaptool: set fsid to f8b8c16a-19ac-11f1-87e7-9b7402b99c44
2026-03-06T23:37:18.080 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-06T23:37:18.080 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-06T23:37:18.080 INFO:teuthology.orchestra.run.vm02.stdout:Creating mon...
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.147+0000 7fb6a8943d80 0 set uid:gid to 167:167 (ceph:ceph)
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.147+0000 7fb6a8943d80 1 imported monmap:
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr epoch 0
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr last_changed 2026-03-06T22:37:18.048883+0000
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr created 2026-03-06T22:37:18.048883+0000
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr min_mon_release 17 (quincy)
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr election_strategy: 1
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.vm02
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.147+0000 7fb6a8943d80 0 /usr/bin/ceph-mon: set fsid to f8b8c16a-19ac-11f1-87e7-9b7402b99c44
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: RocksDB version: 7.9.2
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Git sha 0
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Compile date 2026-03-06 13:52:12
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: DB SUMMARY
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: DB Session ID: RYSIML60XY51ORZL70QQ
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-vm02/store.db dir, Total Num: 0, files:
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-vm02/store.db:
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.error_if_exists: 0
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.create_if_missing: 1
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.paranoid_checks: 1
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.flush_verify_memtable_count: 1
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.env: 0x5635c2ce4ca0
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.fs: PosixFileSystem
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.info_log: 0x5635e9e2cce0
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.max_file_opening_threads: 16
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.statistics: (nil)
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.use_fsync: 0
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.max_log_file_size: 0
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.log_file_time_to_roll: 0
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.keep_log_file_num: 1000
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.recycle_log_file_num: 0
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.allow_fallocate: 1
2026-03-06T23:37:18.206 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.allow_mmap_reads: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.allow_mmap_writes: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.use_direct_reads: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.create_missing_column_families: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.db_log_dir:
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.wal_dir:
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.table_cache_numshardbits: 6
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.WAL_ttl_seconds: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.WAL_size_limit_MB: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.is_fd_close_on_exec: 1
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.advise_random_on_open: 1
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.db_write_buffer_size: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.write_buffer_manager: 0x5635e9e235e0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.use_adaptive_mutex: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.rate_limiter: (nil)
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.wal_recovery_mode: 2
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.enable_thread_tracking: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.enable_pipelined_write: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.unordered_write: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.row_cache: None
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.wal_filter: None
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.avoid_flush_during_recovery: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.allow_ingest_behind: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.two_write_queues: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.manual_wal_flush: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.wal_compression: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.atomic_flush: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.persist_stats_to_disk: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.write_dbid_to_manifest: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.log_readahead_size: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.best_efforts_recovery: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.allow_data_in_errors: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.db_host_id: __hostname__
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.enforce_single_del_contracts: true
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.max_background_jobs: 2
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.max_background_compactions: -1
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.max_subcompactions: 1
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.delayed_write_rate : 16777216
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.max_total_wal_size: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.stats_dump_period_sec: 600
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.stats_persist_period_sec: 600
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.stats_history_buffer_size: 1048576
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.max_open_files: -1
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.bytes_per_sync: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.wal_bytes_per_sync: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.strict_bytes_per_sync: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.compaction_readahead_size: 0
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.max_background_flushes: -1
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Compression algorithms supported:
2026-03-06T23:37:18.207 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: kZSTD supported: 0
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: kXpressCompression supported: 0
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: kBZip2Compression supported: 0
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: kZSTDNotFinalCompression supported: 0
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: kLZ4Compression supported: 1
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: kZlibCompression supported: 1
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: kLZ4HCCompression supported: 1
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: kSnappyCompression supported: 1
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Fast CRC32 supported: Supported on x86
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: DMutex implementation: pthread_mutex_t
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: [db/db_impl/db_impl_open.cc:317] Creating manifest 1
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-vm02/store.db/MANIFEST-000001
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.merge_operator:
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.compaction_filter: None
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.compaction_filter_factory: None
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.sst_partitioner_factory: None
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.memtable_factory: SkipListFactory
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.table_factory: BlockBasedTable
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5635e9e1f400)
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks: 1
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks_with_high_priority: 0
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr pin_top_level_index_and_filter: 1
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr index_type: 0
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr data_block_index_type: 0
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr index_shortening: 1
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr data_block_hash_table_util_ratio: 0.750000
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr checksum: 4
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr no_block_cache: 0
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr block_cache: 0x5635e9e451f0
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr block_cache_name: BinnedLRUCache
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr block_cache_options:
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr capacity : 536870912
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr num_shard_bits : 4
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr strict_capacity_limit : 0
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr high_pri_pool_ratio: 0.000
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr block_cache_compressed: (nil)
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr persistent_cache: (nil)
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr block_size: 4096
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr block_size_deviation: 10
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr block_restart_interval: 16
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr index_block_restart_interval: 1
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr metadata_block_size: 4096
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr partition_filters: 0
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr use_delta_encoding: 1
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr filter_policy: bloomfilter
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr whole_key_filtering: 1
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr verify_compression: 0
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr read_amp_bytes_per_bit: 0
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr format_version: 5
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr enable_index_compression: 1
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr block_align: 0
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr max_auto_readahead_size: 262144
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr prepopulate_block_cache: 0
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr initial_auto_readahead_size: 8192
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr num_file_reads_for_auto_readahead: 2
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.write_buffer_size: 33554432
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.max_write_buffer_number: 2
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.compression: NoCompression
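The column-family values in this dump (write_buffer_size 33554432, compression NoCompression, level_compaction_dynamic_level_bytes 1) line up with the monitor's default RocksDB tuning, which Ceph exposes through the mon_rocksdb_options setting. A sketch for inspecting that setting on a running cluster; the exact output formatting is an assumption:

    import subprocess

    # `ceph config get <who> <option>` returns the effective value, e.g. a
    # comma-separated string like "write_buffer_size=33554432,
    # compression=kNoCompression,...".
    out = subprocess.run(
        ["ceph", "config", "get", "mon", "mon_rocksdb_options"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out)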
2026-03-06T23:37:18.208 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.bottommost_compression: Disabled
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.prefix_extractor: nullptr
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.num_levels: 7
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.compression_opts.window_bits: -14
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.compression_opts.level: 32767
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.compression_opts.strategy: 0
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.compression_opts.parallel_threads: 1
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.compression_opts.enabled: false
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.level0_stop_writes_trigger: 36
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.target_file_size_base: 67108864
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.target_file_size_multiplier: 1
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.max_compaction_bytes: 1677721600
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.arena_block_size: 1048576
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.disable_auto_compactions: 0
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.inplace_update_support: 0
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.inplace_update_num_locks: 10000
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.memtable_whole_key_filtering: 0
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.memtable_huge_page_size: 0
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.bloom_locality: 0
2026-03-06T23:37:18.209 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.max_successive_merges: 0
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.optimize_filters_for_hits: 0
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.paranoid_file_checks: 0
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.force_consistency_checks: 1
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.report_bg_io_stats: 0
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.ttl: 2592000
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.periodic_compaction_seconds: 0
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.preserve_internal_time_seconds: 0
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.enable_blob_files: false
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.min_blob_size: 0
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.blob_file_size: 268435456
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.blob_compression_type: NoCompression
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.enable_blob_garbage_collection: false
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.blob_compaction_readahead_size: 0
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.blob_file_starting_level: 0
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.151+0000 7fb6a8943d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.155+0000 7fb6a8943d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-vm02/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.155+0000 7fb6a8943d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.155+0000 7fb6a8943d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 37293902-b27d-4c87-9956-c23730580794
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.155+0000 7fb6a8943d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 5
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.155+0000 7fb6a8943d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5635e9e46e00
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.155+0000 7fb6a8943d80 4 rocksdb: DB pointer 0x5635e9f2a000
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.159+0000 7fb6a00cd640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.159+0000 7fb6a00cd640 4 rocksdb: [db/db_impl/db_impl.cc:1111]
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr ** DB Stats **
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Interval stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr ** Compaction Stats [default] **
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr ** Compaction Stats [default] **
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Flush(GB): cumulative 0.000, interval 0.000
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr AddFile(GB): cumulative 0.000, interval 0.000
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr AddFile(Total Files): cumulative 0, interval 0
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr AddFile(L0 Files): cumulative 0, interval 0
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr AddFile(Keys): cumulative 0, interval 0
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Block cache BinnedLRUCache@0x5635e9e451f0#7 capacity: 512.00 MB usage: 0.00 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 7e-06 secs_since: 0
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Block cache entry stats(count,size,portion): Misc(1,0.00 KB,0%)
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr ** File Read Latency Histogram By Level [default] **
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.159+0000 7fb6a8943d80 4 rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
2026-03-06T23:37:18.210 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.159+0000 7fb6a8943d80 4 rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
2026-03-06T23:37:18.211 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-06T22:37:18.159+0000 7fb6a8943d80 0 /usr/bin/ceph-mon: created monfs at /var/lib/ceph/mon/ceph-vm02 for mon.vm02
2026-03-06T23:37:18.211 INFO:teuthology.orchestra.run.vm02.stdout:create mon.vm02 on
2026-03-06T23:37:18.551 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target.
2026-03-06T23:37:18.723 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44.target → /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44.target.
2026-03-06T23:37:18.723 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44.target → /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44.target.
2026-03-06T23:37:18.902 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@mon.vm02
2026-03-06T23:37:18.902 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Failed to reset failed state of unit ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@mon.vm02.service: Unit ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@mon.vm02.service not loaded.
2026-03-06T23:37:19.053 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44.target.wants/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@mon.vm02.service → /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service.
2026-03-06T23:37:19.064 INFO:teuthology.orchestra.run.vm02.stdout:firewalld does not appear to be present
2026-03-06T23:37:19.064 INFO:teuthology.orchestra.run.vm02.stdout:Not possible to enable service . firewalld.service is not available
2026-03-06T23:37:19.064 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for mon to start...
2026-03-06T23:37:19.064 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for mon...
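The non-zero exit from systemctl reset-failed above is benign: on a first deployment the unit has never been loaded, so there is no failed state to clear, and the bootstrap deliberately carries on. A sketch of that tolerate-and-continue pattern, with the unit name taken from the log (the enable/start step afterwards is an assumption about ordering, not cephadm's exact sequence):

    import subprocess

    unit = "ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@mon.vm02"

    # check=False: a failure here must not abort the bootstrap.
    res = subprocess.run(["systemctl", "reset-failed", unit],
                         capture_output=True, text=True, check=False)
    if res.returncode != 0:
        print(f"Non-zero exit code {res.returncode} from systemctl reset-failed {unit}")

    # Enable and start the freshly created unit.
    subprocess.run(["systemctl", "enable", "--now", unit], check=True)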
2026-03-06T23:37:19.390 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:19 vm02 bash[16547]: cluster 2026-03-06T22:37:19.207558+0000 mon.vm02 (mon.0) 1 : cluster [INF] mon.vm02 is new leader, mons vm02 in quorum (ranks 0)
2026-03-06T23:37:19.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout cluster:
2026-03-06T23:37:19.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout id: f8b8c16a-19ac-11f1-87e7-9b7402b99c44
2026-03-06T23:37:19.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout health: HEALTH_OK
2026-03-06T23:37:19.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout
2026-03-06T23:37:19.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout services:
2026-03-06T23:37:19.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum vm02 (age 0.146421s)
2026-03-06T23:37:19.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mgr: no daemons active
2026-03-06T23:37:19.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in
2026-03-06T23:37:19.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout
2026-03-06T23:37:19.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout data:
2026-03-06T23:37:19.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs
2026-03-06T23:37:19.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B
2026-03-06T23:37:19.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail
2026-03-06T23:37:19.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout pgs:
2026-03-06T23:37:19.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout
2026-03-06T23:37:19.408 INFO:teuthology.orchestra.run.vm02.stdout:mon is available
2026-03-06T23:37:19.409 INFO:teuthology.orchestra.run.vm02.stdout:Assimilating anything we can from ceph.conf...
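"Waiting for mon..." followed by the `ceph -s`-style status block above is a readiness probe: keep calling the ceph CLI until the new monitor answers and reports quorum. A sketch of such a poll; the retry interval and count are assumptions, not cephadm's exact policy:

    import json
    import subprocess
    import time

    def mon_is_available(conf="/etc/ceph/ceph.conf", tries=30, delay=2.0):
        for _ in range(tries):
            proc = subprocess.run(
                ["ceph", "-c", conf, "status", "--format", "json"],
                capture_output=True, text=True, check=False,
            )
            if proc.returncode == 0:
                status = json.loads(proc.stdout)
                # Quorum reached once the new mon lists itself, e.g. ["vm02"].
                if status.get("quorum_names"):
                    return True
            time.sleep(delay)
        return False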
2026-03-06T23:37:19.746 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout
2026-03-06T23:37:19.746 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout [global]
2026-03-06T23:37:19.746 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout fsid = f8b8c16a-19ac-11f1-87e7-9b7402b99c44
2026-03-06T23:37:19.746 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug
2026-03-06T23:37:19.746 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.102:3300,v1:192.168.123.102:6789]
2026-03-06T23:37:19.746 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true
2026-03-06T23:37:19.746 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true
2026-03-06T23:37:19.746 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false
2026-03-06T23:37:19.746 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0
2026-03-06T23:37:19.746 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout
2026-03-06T23:37:19.746 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout [mgr]
2026-03-06T23:37:19.746 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false
2026-03-06T23:37:19.746 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout
2026-03-06T23:37:19.746 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout [osd]
2026-03-06T23:37:19.746 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10
2026-03-06T23:37:19.746 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true
2026-03-06T23:37:19.746 INFO:teuthology.orchestra.run.vm02.stdout:Generating new minimal ceph.conf...
2026-03-06T23:37:20.037 INFO:teuthology.orchestra.run.vm02.stdout:Restarting the monitor...
2026-03-06T23:37:20.144 INFO:teuthology.orchestra.run.vm02.stdout:Setting public_network to 192.168.123.1/32,192.168.123.0/24 in mon config section
2026-03-06T23:37:20.268 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 systemd[1]: Stopping Ceph mon.vm02 for f8b8c16a-19ac-11f1-87e7-9b7402b99c44...
2026-03-06T23:37:20.268 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[16547]: debug 2026-03-06T22:37:20.067+0000 7f0dac980640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.vm02 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-06T23:37:20.268 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[16547]: debug 2026-03-06T22:37:20.067+0000 7f0dac980640 -1 mon.vm02@0(leader) e1 *** Got Signal Terminated ***
2026-03-06T23:37:20.268 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[16929]: ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44-mon-vm02
2026-03-06T23:37:20.268 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 systemd[1]: ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@mon.vm02.service: Deactivated successfully.
2026-03-06T23:37:20.268 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 systemd[1]: Stopped Ceph mon.vm02 for f8b8c16a-19ac-11f1-87e7-9b7402b99c44.
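
Note: the assimilation step pushes the options from the bootstrap ceph.conf into the mon's centralized config database and echoes back what it stored (the [global]/[mgr]/[osd] dump above); the "new minimal ceph.conf" then only needs to carry the fsid and mon_host so clients can find the cluster. Done by hand it would look roughly like this (a sketch; `ceph config assimilate-conf` is the real command, the path is illustrative):

    import subprocess

    # Feed the legacy ini-style conf into the mon config store; the command
    # prints the resulting assimilated configuration on stdout.
    subprocess.run(
        ["ceph", "config", "assimilate-conf", "-i", "/etc/ceph/ceph.conf"],
        check=True)

    # Afterwards a minimal conf needs little more than identity + mon locations:
    minimal_conf = (
        "[global]\n"
        "fsid = f8b8c16a-19ac-11f1-87e7-9b7402b99c44\n"
        "mon_host = [v2:192.168.123.102:3300,v1:192.168.123.102:6789]\n"
    )
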
2026-03-06T23:37:20.268 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 systemd[1]: Started Ceph mon.vm02 for f8b8c16a-19ac-11f1-87e7-9b7402b99c44.
2026-03-06T23:37:20.508 INFO:teuthology.orchestra.run.vm02.stdout:Wrote config to /etc/ceph/ceph.conf
2026-03-06T23:37:20.514 INFO:teuthology.orchestra.run.vm02.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring
2026-03-06T23:37:20.514 INFO:teuthology.orchestra.run.vm02.stdout:Creating mgr...
2026-03-06T23:37:20.514 INFO:teuthology.orchestra.run.vm02.stdout:Verifying port 0.0.0.0:9283 ...
2026-03-06T23:37:20.514 INFO:teuthology.orchestra.run.vm02.stdout:Verifying port 0.0.0.0:8765 ...
2026-03-06T23:37:20.514 INFO:teuthology.orchestra.run.vm02.stdout:Verifying port 0.0.0.0:8443 ...
2026-03-06T23:37:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.255+0000 7f61e6b64d80 0 set uid:gid to 167:167 (ceph:ceph)
2026-03-06T23:37:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.255+0000 7f61e6b64d80 0 ceph version 19.2.3-39-g340d3c24fc6 (340d3c24fc6ae7529322dc7ccee6c6cb2589da0a) squid (stable), process ceph-mon, pid 8
2026-03-06T23:37:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.255+0000 7f61e6b64d80 0 pidfile_write: ignore empty --pid-file
2026-03-06T23:37:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 0 load: jerasure load: lrc
2026-03-06T23:37:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: RocksDB version: 7.9.2
2026-03-06T23:37:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Git sha 0
2026-03-06T23:37:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Compile date 2026-03-06 13:52:12
2026-03-06T23:37:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: DB SUMMARY
2026-03-06T23:37:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: DB Session ID: JPTFD9G8P635XUTPM27M
2026-03-06T23:37:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: CURRENT file: CURRENT
2026-03-06T23:37:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: IDENTITY file: IDENTITY
2026-03-06T23:37:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes
2026-03-06T23:37:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-vm02/store.db dir, Total Num: 1, files: 000008.sst
2026-03-06T23:37:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-vm02/store.db: 000009.log size: 75215 ;
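
Note: before the mgr is deployed, the standard mgr ports are probed for availability: 9283 (prometheus module), 8765 (cephadm's service-discovery endpoint), 8443 (dashboard). A free-port probe in the same spirit (a sketch; cephadm's actual check differs in detail):

    import socket

    def port_is_free(port: int, addr: str = "0.0.0.0") -> bool:
        # Binding succeeds only if nothing is listening on the port yet.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            try:
                s.bind((addr, port))
                return True
            except OSError:
                return False

    for port in (9283, 8765, 8443):
        print(port, "free" if port_is_free(port) else "in use")
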
2026-03-06T23:37:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.error_if_exists: 0
2026-03-06T23:37:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.create_if_missing: 0
2026-03-06T23:37:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.paranoid_checks: 1
2026-03-06T23:37:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.flush_verify_memtable_count: 1
2026-03-06T23:37:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-06T23:37:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1
2026-03-06T23:37:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.env: 0x557e316bbca0
2026-03-06T23:37:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.fs: PosixFileSystem
2026-03-06T23:37:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.info_log: 0x557e63910500
2026-03-06T23:37:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.max_file_opening_threads: 16
2026-03-06T23:37:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.statistics: (nil)
2026-03-06T23:37:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.use_fsync: 0
2026-03-06T23:37:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.max_log_file_size: 0
2026-03-06T23:37:20.530 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-06T23:37:20.537 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.log_file_time_to_roll: 0
2026-03-06T23:37:20.537 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.keep_log_file_num: 1000
2026-03-06T23:37:20.537 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.recycle_log_file_num: 0
2026-03-06T23:37:20.537 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.allow_fallocate: 1
2026-03-06T23:37:20.537 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.allow_mmap_reads: 0
2026-03-06T23:37:20.537 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.allow_mmap_writes: 0
2026-03-06T23:37:20.537 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.use_direct_reads: 0
2026-03-06T23:37:20.537 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-06T23:37:20.537 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.create_missing_column_families: 0
2026-03-06T23:37:20.537 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.db_log_dir:
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.wal_dir:
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.table_cache_numshardbits: 6
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.WAL_ttl_seconds: 0
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.WAL_size_limit_MB: 0
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.is_fd_close_on_exec: 1
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.advise_random_on_open: 1
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.db_write_buffer_size: 0
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.write_buffer_manager: 0x557e63915900
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.use_adaptive_mutex: 0
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.rate_limiter: (nil)
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.wal_recovery_mode: 2
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.enable_thread_tracking: 0
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.enable_pipelined_write: 0
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.unordered_write: 0
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.row_cache: None
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.wal_filter: None
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.avoid_flush_during_recovery: 0
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.allow_ingest_behind: 0
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.two_write_queues: 0
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.manual_wal_flush: 0
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.wal_compression: 0
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.atomic_flush: 0
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.persist_stats_to_disk: 0
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.write_dbid_to_manifest: 0
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.log_readahead_size: 0
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.best_efforts_recovery: 0
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.allow_data_in_errors: 0
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.db_host_id: __hostname__
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.enforce_single_del_contracts: true
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.max_background_jobs: 2
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.max_background_compactions: -1
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.max_subcompactions: 1
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.delayed_write_rate : 16777216
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.max_total_wal_size: 0
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.stats_dump_period_sec: 600
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.stats_persist_period_sec: 600
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.stats_history_buffer_size: 1048576
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.max_open_files: -1
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.bytes_per_sync: 0
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.wal_bytes_per_sync: 0
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.strict_bytes_per_sync: 0
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.compaction_readahead_size: 0
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.max_background_flushes: -1
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Compression algorithms supported:
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: kZSTD supported: 0
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: kXpressCompression supported: 0
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: kBZip2Compression supported: 0
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: kZSTDNotFinalCompression supported: 0
2026-03-06T23:37:20.538 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: kLZ4Compression supported: 1
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: kZlibCompression supported: 1
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: kLZ4HCCompression supported: 1
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: kSnappyCompression supported: 1
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Fast CRC32 supported: Supported on x86
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: DMutex implementation: pthread_mutex_t
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-vm02/store.db/MANIFEST-000010
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.merge_operator:
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.compaction_filter: None
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.compaction_filter_factory: None
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.sst_partitioner_factory: None
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.memtable_factory: SkipListFactory
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.table_factory: BlockBasedTable
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557e639104c0)
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: cache_index_and_filter_blocks: 1
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: cache_index_and_filter_blocks_with_high_priority: 0
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: pin_top_level_index_and_filter: 1
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: index_type: 0
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: data_block_index_type: 0
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: index_shortening: 1
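
Note: everything from "DB SUMMARY" onward is RocksDB echoing its effective configuration as the mon store opens; for this mon that means no compression, a 512 MB BinnedLRUCache block cache, and bloom filters enabled. When a particular setting matters, pulling the dump into key/value pairs beats eyeballing it; a throwaway parser (illustrative only):

    import re, sys

    OPT = re.compile(r"rocksdb:\s+(Options\.[\w.\[\]]+)\s*:\s*(.*?)\s*$")

    def parse_rocksdb_options(lines):
        # Collect "Options.<name>: <value>" pairs from a mon log stream.
        opts = {}
        for line in lines:
            m = OPT.search(line)
            if m:
                opts[m.group(1)] = m.group(2)
        return opts

    if __name__ == "__main__":
        for key, value in sorted(parse_rocksdb_options(sys.stdin).items()):
            print(key, "=", value)
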
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: data_block_hash_table_util_ratio: 0.750000
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: checksum: 4
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: no_block_cache: 0
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: block_cache: 0x557e639371f0
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: block_cache_name: BinnedLRUCache
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: block_cache_options:
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: capacity : 536870912
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: num_shard_bits : 4
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: strict_capacity_limit : 0
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: high_pri_pool_ratio: 0.000
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: block_cache_compressed: (nil)
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: persistent_cache: (nil)
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: block_size: 4096
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: block_size_deviation: 10
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: block_restart_interval: 16
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: index_block_restart_interval: 1
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: metadata_block_size: 4096
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: partition_filters: 0
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: use_delta_encoding: 1
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: filter_policy: bloomfilter
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: whole_key_filtering: 1
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: verify_compression: 0
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: read_amp_bytes_per_bit: 0
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: format_version: 5
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: enable_index_compression: 1
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: block_align: 0
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: max_auto_readahead_size: 262144
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: prepopulate_block_cache: 0
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: initial_auto_readahead_size: 8192
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: num_file_reads_for_auto_readahead: 2
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.write_buffer_size: 33554432
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.max_write_buffer_number: 2
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.compression: NoCompression
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.bottommost_compression: Disabled
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.prefix_extractor: nullptr
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.num_levels: 7
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-06T23:37:20.539 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.compression_opts.window_bits: -14
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.compression_opts.level: 32767
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.compression_opts.strategy: 0
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.compression_opts.parallel_threads: 1
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.compression_opts.enabled: false
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.level0_stop_writes_trigger: 36
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.target_file_size_base: 67108864
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.target_file_size_multiplier: 1
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.max_compaction_bytes: 1677721600
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.arena_block_size: 1048576
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.disable_auto_compactions: 0
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.inplace_update_support: 0
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.inplace_update_num_locks: 10000
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.memtable_whole_key_filtering: 0
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.memtable_huge_page_size: 0
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.bloom_locality: 0
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.max_successive_merges: 0
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.optimize_filters_for_hits: 0
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.paranoid_file_checks: 0
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.force_consistency_checks: 1
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.report_bg_io_stats: 0
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.ttl: 2592000
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.periodic_compaction_seconds: 0
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.preserve_internal_time_seconds: 0
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.enable_blob_files: false
2026-03-06T23:37:20.540 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.min_blob_size: 0
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.blob_file_size: 268435456
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.blob_compression_type: NoCompression
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.enable_blob_garbage_collection: false
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.blob_compaction_readahead_size: 0
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.blob_file_starting_level: 0
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.259+0000 7f61e6b64d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.271+0000 7f61e6b64d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-vm02/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.271+0000 7f61e6b64d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.271+0000 7f61e6b64d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 37293902-b27d-4c87-9956-c23730580794
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.271+0000 7f61e6b64d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1772836640273359, "job": 1, "event": "recovery_started", "wal_files": [9]}
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.271+0000 7f61e6b64d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.271+0000 7f61e6b64d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1772836640275369, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 72311, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 223, "table_properties": {"data_size": 70593, "index_size": 171, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 517, "raw_key_size": 9562, "raw_average_key_size": 49, "raw_value_size": 65187, "raw_average_value_size": 336, "num_data_blocks": 8, "num_entries": 194, "num_filter_entries": 194, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772836640, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "37293902-b27d-4c87-9956-c23730580794", "db_session_id": "JPTFD9G8P635XUTPM27M", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.271+0000 7f61e6b64d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1772836640275459, "job": 1, "event": "recovery_finished"}
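
Note: the EVENT_LOG_v1 entries above are RocksDB's normal crash-safe restart: recovery_started names WAL 000009.log, its 194 live entries are flushed into a fresh SST (file_number 13, 72311 bytes), recovery_finished follows, and a new manifest then supersedes MANIFEST-000010 so the old WAL can be deleted. The payload after the EVENT_LOG_v1 marker is plain JSON, so it is easy to mine from a log stream (a sketch):

    import json, re, sys

    EVENT = re.compile(r"EVENT_LOG_v1\s+(\{.*\})")

    def iter_events(lines):
        # Yield parsed RocksDB EVENT_LOG_v1 JSON payloads from a log stream.
        for line in lines:
            m = EVENT.search(line)
            if m:
                yield json.loads(m.group(1))

    for event in iter_events(sys.stdin):
        print(event.get("event"), "job", event.get("job"),
              event.get("file_number", ""))
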
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.271+0000 7f61e6b64d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 15
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.279+0000 7f61e6b64d80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-vm02/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.279+0000 7f61e6b64d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557e63938e00
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.279+0000 7f61e6b64d80 4 rocksdb: DB pointer 0x557e63a54000
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.279+0000 7f61e6b64d80 0 starting mon.vm02 rank 0 at public addrs [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] at bind addrs [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon_data /var/lib/ceph/mon/ceph-vm02 fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.279+0000 7f61e6b64d80 1 mon.vm02@-1(???) e1 preinit fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.279+0000 7f61e6b64d80 5 mon.vm02@-1(???).mds e0 Unable to load 'last_metadata'
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.279+0000 7f61e6b64d80 5 mon.vm02@-1(???).mds e0 Unable to load 'last_metadata'
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.279+0000 7f61e6b64d80 0 mon.vm02@-1(???).mds e1 new map
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.279+0000 7f61e6b64d80 0 mon.vm02@-1(???).mds e1 print_map
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: e1
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: btime 2026-03-06T22:37:19:212552+0000
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: enable_multiple, ever_enabled_multiple: 1,1
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: legacy client fscid: -1
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]:
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: No filesystems configured
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.279+0000 7f61e6b64d80 0 mon.vm02@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.279+0000 7f61e6b64d80 0 mon.vm02@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.279+0000 7f61e6b64d80 0 mon.vm02@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.279+0000 7f61e6b64d80 0 mon.vm02@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.279+0000 7f61e6b64d80 1 mon.vm02@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.279+0000 7f61e6b64d80 4 mon.vm02@-1(???).mgr e0 loading version 1
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.279+0000 7f61e6b64d80 4 mon.vm02@-1(???).mgr e1 active server: (0)
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.279+0000 7f61e6b64d80 4 mon.vm02@-1(???).mgr e1 mkfs or daemon transitioned to available, loading commands
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.283+0000 7f61dc92e640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: debug 2026-03-06T22:37:20.283+0000 7f61dc92e640 4 rocksdb: [db/db_impl/db_impl.cc:1111]
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: ** DB Stats **
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: Uptime(secs): 0.0 total, 0.0 interval
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: ** Compaction Stats [default] **
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: L0 2/0 72.49 KB 0.5 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 38.8 0.00 0.00 1 0.002 0 0 0.0 0.0
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: Sum 2/0 72.49 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 38.8 0.00 0.00 1 0.002 0 0 0.0 0.0
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 38.8 0.00 0.00 1 0.002 0 0 0.0 0.0
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: ** Compaction Stats [default] **
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 38.8 0.00 0.00 1 0.002 0 0 0.0 0.0
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: Uptime(secs): 0.0 total, 0.0 interval
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: Flush(GB): cumulative 0.000, interval 0.000
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: AddFile(GB): cumulative 0.000, interval 0.000
2026-03-06T23:37:20.541 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: AddFile(Total Files): cumulative 0, interval 0
2026-03-06T23:37:20.542 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: AddFile(L0 Files): cumulative 0, interval 0
2026-03-06T23:37:20.542 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: AddFile(Keys): cumulative 0, interval 0
2026-03-06T23:37:20.542 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: Cumulative compaction: 0.00 GB write, 3.05 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-06T23:37:20.542 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: Interval compaction: 0.00 GB write, 3.05 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-06T23:37:20.542 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
2026-03-06T23:37:20.542 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: Block cache BinnedLRUCache@0x557e639371f0#8 capacity: 512.00 MB usage: 54.70 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 7e-06 secs_since: 0
2026-03-06T23:37:20.542 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: Block cache entry stats(count,size,portion): DataBlock(8,53.64 KB,0.0102311%) FilterBlock(2,0.70 KB,0.00013411%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
2026-03-06T23:37:20.542 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: ** File Read Latency Histogram By Level [default] **
2026-03-06T23:37:20.542 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: cluster 2026-03-06T22:37:20.289755+0000 mon.vm02 (mon.0) 1 : cluster [INF] mon.vm02 is new leader, mons vm02 in quorum (ranks 0)
2026-03-06T23:37:20.542 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: cluster 2026-03-06T22:37:20.289755+0000 mon.vm02 (mon.0) 1 : cluster [INF] mon.vm02 is new leader, mons vm02 in quorum (ranks 0)
2026-03-06T23:37:20.542 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: cluster 2026-03-06T22:37:20.289790+0000 mon.vm02 (mon.0) 2 : cluster [DBG] monmap epoch 1
2026-03-06T23:37:20.542 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: cluster 2026-03-06T22:37:20.289790+0000 mon.vm02 (mon.0) 2 : cluster [DBG] monmap epoch 1
2026-03-06T23:37:20.542 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: cluster 2026-03-06T22:37:20.289794+0000 mon.vm02 (mon.0) 3 : cluster [DBG] fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44
2026-03-06T23:37:20.542 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: cluster 2026-03-06T22:37:20.289794+0000 mon.vm02 (mon.0) 3 : cluster [DBG] fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44
2026-03-06T23:37:20.542 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: cluster 2026-03-06T22:37:20.289796+0000 mon.vm02 (mon.0) 4 : cluster [DBG] last_changed 2026-03-06T22:37:18.048883+0000
2026-03-06T23:37:20.542 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: cluster 2026-03-06T22:37:20.289796+0000 mon.vm02 (mon.0) 4 : cluster [DBG] last_changed 2026-03-06T22:37:18.048883+0000
2026-03-06T23:37:20.542 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: cluster 2026-03-06T22:37:20.289803+0000 mon.vm02 (mon.0) 5 : cluster [DBG] created 2026-03-06T22:37:18.048883+0000
2026-03-06T23:37:20.542 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: cluster 2026-03-06T22:37:20.289803+0000 mon.vm02 (mon.0) 5 : cluster [DBG] created 2026-03-06T22:37:18.048883+0000
2026-03-06T23:37:20.542 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: cluster 2026-03-06T22:37:20.289806+0000 mon.vm02 (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid)
2026-03-06T23:37:20.542 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: cluster 2026-03-06T22:37:20.289806+0000 mon.vm02 (mon.0) 6
: cluster [DBG] min_mon_release 19 (squid) 2026-03-06T23:37:20.542 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: cluster 2026-03-06T22:37:20.289808+0000 mon.vm02 (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-06T23:37:20.542 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: cluster 2026-03-06T22:37:20.289808+0000 mon.vm02 (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-06T23:37:20.542 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: cluster 2026-03-06T22:37:20.289810+0000 mon.vm02 (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.vm02 2026-03-06T23:37:20.542 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: cluster 2026-03-06T22:37:20.289810+0000 mon.vm02 (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.vm02 2026-03-06T23:37:20.542 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: cluster 2026-03-06T22:37:20.290051+0000 mon.vm02 (mon.0) 9 : cluster [DBG] fsmap 2026-03-06T23:37:20.542 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: cluster 2026-03-06T22:37:20.290051+0000 mon.vm02 (mon.0) 9 : cluster [DBG] fsmap 2026-03-06T23:37:20.542 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: cluster 2026-03-06T22:37:20.290070+0000 mon.vm02 (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-06T23:37:20.542 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: cluster 2026-03-06T22:37:20.290070+0000 mon.vm02 (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-06T23:37:20.542 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: cluster 2026-03-06T22:37:20.290749+0000 mon.vm02 (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-06T23:37:20.542 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 bash[17013]: cluster 2026-03-06T22:37:20.290749+0000 mon.vm02 (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-06T23:37:20.689 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@mgr.vm02.opvwec 2026-03-06T23:37:20.690 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Failed to reset failed state of unit ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@mgr.vm02.opvwec.service: Unit ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@mgr.vm02.opvwec.service not loaded. 2026-03-06T23:37:20.811 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-06T23:37:20.861 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44.target.wants/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@mgr.vm02.opvwec.service → /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service. 2026-03-06T23:37:20.869 INFO:teuthology.orchestra.run.vm02.stdout:firewalld does not appear to be present 2026-03-06T23:37:20.869 INFO:teuthology.orchestra.run.vm02.stdout:Not possible to enable service . 
firewalld.service is not available 2026-03-06T23:37:20.869 INFO:teuthology.orchestra.run.vm02.stdout:firewalld does not appear to be present 2026-03-06T23:37:20.869 INFO:teuthology.orchestra.run.vm02.stdout:Not possible to open ports <[9283, 8765, 8443]>. firewalld.service is not available 2026-03-06T23:37:20.869 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for mgr to start... 2026-03-06T23:37:20.869 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for mgr... 2026-03-06T23:37:21.178 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:20 vm02 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-06T23:37:21.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-06T23:37:21.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout { 2026-03-06T23:37:21.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "fsid": "f8b8c16a-19ac-11f1-87e7-9b7402b99c44", 2026-03-06T23:37:21.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "health": { 2026-03-06T23:37:21.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-06T23:37:21.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-06T23:37:21.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-06T23:37:21.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-06T23:37:21.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-06T23:37:21.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-06T23:37:21.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 0 2026-03-06T23:37:21.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-06T23:37:21.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-06T23:37:21.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "vm02" 2026-03-06T23:37:21.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-06T23:37:21.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-06T23:37:21.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-06T23:37:21.216 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 
"osd_up_since": 0, 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "btime": "2026-03-06T22:37:19:212552+0000", 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "restful" 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "modified": "2026-03-06T22:37:19.213278+0000", 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-06T23:37:21.217 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout } 2026-03-06T23:37:21.217 INFO:teuthology.orchestra.run.vm02.stdout:mgr not available, waiting (1/15)... 2026-03-06T23:37:21.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:21 vm02 bash[17013]: audit 2026-03-06T22:37:20.458621+0000 mon.vm02 (mon.0) 12 : audit [INF] from='client.? 192.168.123.102:0/2202929384' entity='client.admin' 2026-03-06T23:37:21.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:21 vm02 bash[17013]: audit 2026-03-06T22:37:20.458621+0000 mon.vm02 (mon.0) 12 : audit [INF] from='client.? 192.168.123.102:0/2202929384' entity='client.admin' 2026-03-06T23:37:21.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:21 vm02 bash[17013]: audit 2026-03-06T22:37:21.164888+0000 mon.vm02 (mon.0) 13 : audit [DBG] from='client.? 192.168.123.102:0/4117772845' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-06T23:37:21.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:21 vm02 bash[17013]: audit 2026-03-06T22:37:21.164888+0000 mon.vm02 (mon.0) 13 : audit [DBG] from='client.? 192.168.123.102:0/4117772845' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-06T23:37:23.587 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-06T23:37:23.587 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout { 2026-03-06T23:37:23.587 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "fsid": "f8b8c16a-19ac-11f1-87e7-9b7402b99c44", 2026-03-06T23:37:23.587 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "health": { 2026-03-06T23:37:23.587 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-06T23:37:23.587 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-06T23:37:23.587 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-06T23:37:23.587 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-06T23:37:23.587 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 0 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "vm02" 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum_age": 3, 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-06T23:37:23.588 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "btime": "2026-03-06T22:37:19:212552+0000", 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "restful" 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "services": 
{} 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "modified": "2026-03-06T22:37:19.213278+0000", 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout } 2026-03-06T23:37:23.588 INFO:teuthology.orchestra.run.vm02.stdout:mgr not available, waiting (2/15)... 2026-03-06T23:37:23.992 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:23 vm02 bash[17013]: audit 2026-03-06T22:37:23.510025+0000 mon.vm02 (mon.0) 14 : audit [DBG] from='client.? 192.168.123.102:0/1172062944' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-06T23:37:23.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:23 vm02 bash[17013]: audit 2026-03-06T22:37:23.510025+0000 mon.vm02 (mon.0) 14 : audit [DBG] from='client.? 192.168.123.102:0/1172062944' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout { 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "fsid": "f8b8c16a-19ac-11f1-87e7-9b7402b99c44", 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "health": { 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 0 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "vm02" 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum_age": 5, 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 
"num_mons": 1 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "btime": "2026-03-06T22:37:19:212552+0000", 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-06T23:37:25.927 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-06T23:37:25.928 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-06T23:37:25.928 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-06T23:37:25.928 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-06T23:37:25.928 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "restful" 2026-03-06T23:37:25.928 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-06T23:37:25.928 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-06T23:37:25.928 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-06T23:37:25.928 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-06T23:37:25.928 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-06T23:37:25.928 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "modified": "2026-03-06T22:37:19.213278+0000", 2026-03-06T23:37:25.928 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-06T23:37:25.928 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-06T23:37:25.928 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-06T23:37:25.928 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout } 2026-03-06T23:37:25.928 INFO:teuthology.orchestra.run.vm02.stdout:mgr not available, waiting (3/15)... 2026-03-06T23:37:26.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:25 vm02 bash[17013]: audit 2026-03-06T22:37:25.877110+0000 mon.vm02 (mon.0) 15 : audit [DBG] from='client.? 192.168.123.102:0/3178222155' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-06T23:37:26.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:25 vm02 bash[17013]: audit 2026-03-06T22:37:25.877110+0000 mon.vm02 (mon.0) 15 : audit [DBG] from='client.? 192.168.123.102:0/3178222155' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-06T23:37:28.407 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-06T23:37:28.407 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout { 2026-03-06T23:37:28.407 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "fsid": "f8b8c16a-19ac-11f1-87e7-9b7402b99c44", 2026-03-06T23:37:28.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "health": { 2026-03-06T23:37:28.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-06T23:37:28.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-06T23:37:28.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-06T23:37:28.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-06T23:37:28.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-06T23:37:28.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-06T23:37:28.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 0 2026-03-06T23:37:28.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-06T23:37:28.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-06T23:37:28.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "vm02" 2026-03-06T23:37:28.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-06T23:37:28.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum_age": 7, 2026-03-06T23:37:28.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-06T23:37:28.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-06T23:37:28.408 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 
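The blocks relayed line by line as "/usr/bin/ceph: stdout" above are the output of "ceph status --format json-pretty", which the audit entries show being dispatched on each attempt. As a rough illustration (not cephadm's actual code; the helper name and direct CLI invocation are assumptions), the fields the bootstrap is watching can be read like this in Python:

import json
import subprocess

def ceph_status():
    # Run the same command the audit log records
    # ({"prefix": "status", "format": "json-pretty"}) and parse its output.
    out = subprocess.run(
        ["ceph", "status", "--format", "json-pretty"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

if __name__ == "__main__":
    status = ceph_status()
    print(status["health"]["status"])      # "HEALTH_OK" in the dump above
    print(status["quorum_names"])          # ["vm02"]
    print(status["mgrmap"]["available"])   # false until a mgr activates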
2026-03-06T23:37:28.408 INFO:teuthology.orchestra.run.vm02.stdout:mgr not available, waiting (4/15)...
2026-03-06T23:37:28.730 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:28 vm02 bash[17013]: audit 2026-03-06T22:37:28.281201+0000 mon.vm02 (mon.0) 16 : audit [DBG] from='client.? 192.168.123.102:0/1480517210' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-06T23:37:30.766 INFO:teuthology.orchestra.run.vm02.stdout:mgr not available, waiting (5/15)...
2026-03-06T23:37:30.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:30 vm02 bash[17013]: audit 2026-03-06T22:37:30.698097+0000 mon.vm02 (mon.0) 17 : audit [DBG] from='client.? 192.168.123.102:0/1281659001' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-06T23:37:32.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:31 vm02 bash[17013]: cluster 2026-03-06T22:37:31.091613+0000 mon.vm02 (mon.0) 18 : cluster [INF] Activating manager daemon vm02.opvwec
2026-03-06T23:37:32.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:31 vm02 bash[17013]: cluster 2026-03-06T22:37:31.095717+0000 mon.vm02 (mon.0) 19 : cluster [DBG] mgrmap e2: vm02.opvwec(active, starting, since 0.00420809s)
2026-03-06T23:37:32.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:31 vm02 bash[17013]: audit 2026-03-06T22:37:31.098792+0000 mon.vm02 (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.102:0/2320646264' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-06T23:37:32.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:31 vm02 bash[17013]: audit 2026-03-06T22:37:31.099124+0000 mon.vm02 (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.102:0/2320646264' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata"}]: dispatch
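The "mgr not available, waiting (N/15)..." counter above is a bounded poll: the same status command is re-issued until mgrmap.available turns true or the attempts run out, at which point the mon activates the first mgr daemon. A minimal self-contained sketch of that pattern in Python (the attempt count and sleep interval are illustrative guesses, not cephadm's actual constants):

import json
import subprocess
import sys
import time

def mgr_available():
    # One "ceph status" probe; True once an active mgr has reported in.
    out = subprocess.run(
        ["ceph", "status", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)["mgrmap"]["available"]

def wait_for_mgr(attempts=15, delay=2.0):
    # Bounded retry mirroring the "(N/15)" counter in the log above.
    for attempt in range(1, attempts + 1):
        if mgr_available():
            print("mgr is available")
            return
        print(f"mgr not available, waiting ({attempt}/{attempts})...")
        time.sleep(delay)
    sys.exit("mgr never became available")

if __name__ == "__main__":
    wait_for_mgr()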
2026-03-06T23:37:32.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:31 vm02 bash[17013]: audit 2026-03-06T22:37:31.099124+0000 mon.vm02 (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.102:0/2320646264' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-06T23:37:32.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:31 vm02 bash[17013]: audit 2026-03-06T22:37:31.099445+0000 mon.vm02 (mon.0) 22 : audit [DBG] from='mgr.14100 192.168.123.102:0/2320646264' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-06T23:37:32.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:31 vm02 bash[17013]: audit 2026-03-06T22:37:31.099445+0000 mon.vm02 (mon.0) 22 : audit [DBG] from='mgr.14100 192.168.123.102:0/2320646264' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-06T23:37:32.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:31 vm02 bash[17013]: audit 2026-03-06T22:37:31.100840+0000 mon.vm02 (mon.0) 23 : audit [DBG] from='mgr.14100 192.168.123.102:0/2320646264' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mon metadata", "id": "vm02"}]: dispatch 2026-03-06T23:37:32.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:31 vm02 bash[17013]: audit 2026-03-06T22:37:31.100840+0000 mon.vm02 (mon.0) 23 : audit [DBG] from='mgr.14100 192.168.123.102:0/2320646264' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mon metadata", "id": "vm02"}]: dispatch 2026-03-06T23:37:32.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:31 vm02 bash[17013]: audit 2026-03-06T22:37:31.101181+0000 mon.vm02 (mon.0) 24 : audit [DBG] from='mgr.14100 192.168.123.102:0/2320646264' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mgr metadata", "who": "vm02.opvwec", "id": "vm02.opvwec"}]: dispatch 2026-03-06T23:37:32.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:31 vm02 bash[17013]: audit 2026-03-06T22:37:31.101181+0000 mon.vm02 (mon.0) 24 : audit [DBG] from='mgr.14100 192.168.123.102:0/2320646264' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mgr metadata", "who": "vm02.opvwec", "id": "vm02.opvwec"}]: dispatch 2026-03-06T23:37:32.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:31 vm02 bash[17013]: cluster 2026-03-06T22:37:31.106620+0000 mon.vm02 (mon.0) 25 : cluster [INF] Manager daemon vm02.opvwec is now available 2026-03-06T23:37:32.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:31 vm02 bash[17013]: cluster 2026-03-06T22:37:31.106620+0000 mon.vm02 (mon.0) 25 : cluster [INF] Manager daemon vm02.opvwec is now available 2026-03-06T23:37:32.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:31 vm02 bash[17013]: audit 2026-03-06T22:37:31.116084+0000 mon.vm02 (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.102:0/2320646264' entity='mgr.vm02.opvwec' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm02.opvwec/mirror_snapshot_schedule"}]: dispatch 2026-03-06T23:37:32.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:31 vm02 bash[17013]: audit 2026-03-06T22:37:31.116084+0000 mon.vm02 (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.102:0/2320646264' entity='mgr.vm02.opvwec' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm02.opvwec/mirror_snapshot_schedule"}]: dispatch 2026-03-06T23:37:32.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:31 vm02 bash[17013]: audit 2026-03-06T22:37:31.119712+0000 mon.vm02 (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.102:0/2320646264' entity='mgr.vm02.opvwec' 2026-03-06T23:37:32.243 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:31 vm02 bash[17013]: audit 2026-03-06T22:37:31.119712+0000 mon.vm02 (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.102:0/2320646264' entity='mgr.vm02.opvwec'
2026-03-06T23:37:32.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:31 vm02 bash[17013]: audit 2026-03-06T22:37:31.120817+0000 mon.vm02 (mon.0) 28 : audit [INF] from='mgr.14100 192.168.123.102:0/2320646264' entity='mgr.vm02.opvwec' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm02.opvwec/trash_purge_schedule"}]: dispatch
2026-03-06T23:37:32.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:31 vm02 bash[17013]: audit 2026-03-06T22:37:31.122989+0000 mon.vm02 (mon.0) 29 : audit [INF] from='mgr.14100 192.168.123.102:0/2320646264' entity='mgr.vm02.opvwec'
2026-03-06T23:37:32.244 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:31 vm02 bash[17013]: audit 2026-03-06T22:37:31.125743+0000 mon.vm02 (mon.0) 30 : audit [INF] from='mgr.14100 192.168.123.102:0/2320646264' entity='mgr.vm02.opvwec'
2026-03-06T23:37:33.176 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout
    {
        "fsid": "f8b8c16a-19ac-11f1-87e7-9b7402b99c44",
        "health": {
            "status": "HEALTH_OK",
            "checks": {},
            "mutes": []
        },
        "election_epoch": 5,
        "quorum": [
            0
        ],
        "quorum_names": [
            "vm02"
        ],
        "quorum_age": 12,
        "monmap": {
            "epoch": 1,
            "min_mon_release_name": "squid",
            "num_mons": 1
        },
        "osdmap": {
            "epoch": 1,
            "num_osds": 0,
            "num_up_osds": 0,
            "osd_up_since": 0,
            "num_in_osds": 0,
            "osd_in_since": 0,
            "num_remapped_pgs": 0
        },
        "pgmap": {
            "pgs_by_state": [],
            "num_pgs": 0,
            "num_pools": 0,
            "num_objects": 0,
            "data_bytes": 0,
            "bytes_used": 0,
            "bytes_avail": 0,
            "bytes_total": 0
        },
        "fsmap": {
            "epoch": 1,
            "btime": "2026-03-06T22:37:19:212552+0000",
            "by_rank": [],
            "up:standby": 0
        },
        "mgrmap": {
            "available": true,
            "num_standbys": 0,
            "modules": [
                "iostat",
                "nfs",
                "restful"
            ],
            "services": {}
        },
        "servicemap": {
            "epoch": 1,
            "modified": "2026-03-06T22:37:19.213278+0000",
            "services": {}
        },
        "progress_events": {}
    }
2026-03-06T23:37:33.178 INFO:teuthology.orchestra.run.vm02.stdout:mgr is available
2026-03-06T23:37:33.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:33 vm02 bash[17013]: cluster 2026-03-06T22:37:32.101148+0000 mon.vm02 (mon.0) 31 : cluster [DBG] mgrmap e3: vm02.opvwec(active, since 1.00964s)
2026-03-06T23:37:33.559 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout
    [global]
        fsid = f8b8c16a-19ac-11f1-87e7-9b7402b99c44
        mon_cluster_log_file_level = debug
        mon_host = [v2:192.168.123.102:3300,v1:192.168.123.102:6789]
        mon_osd_allow_pg_remap = true
        mon_osd_allow_primary_affinity = true
        mon_warn_on_no_sortbitwise = false
        osd_crush_chooseleaf_type = 0
    [mgr]
        mgr/telemetry/nag = false
    [osd]
        osd_map_max_advance = 10
        osd_sloppy_crc = true
2026-03-06T23:37:33.559 INFO:teuthology.orchestra.run.vm02.stdout:Enabling cephadm module...
2026-03-06T23:37:34.485 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:34 vm02 bash[17013]: cluster 2026-03-06T22:37:33.105932+0000 mon.vm02 (mon.0) 32 : cluster [DBG] mgrmap e4: vm02.opvwec(active, since 2s)
2026-03-06T23:37:34.485 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:34 vm02 bash[17013]: audit 2026-03-06T22:37:33.126137+0000 mon.vm02 (mon.0) 33 : audit [DBG] from='client.? 192.168.123.102:0/160637492' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-06T23:37:34.485 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:34 vm02 bash[17013]: audit 2026-03-06T22:37:33.508466+0000 mon.vm02 (mon.0) 34 : audit [INF] from='client.? 192.168.123.102:0/2759985110' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
2026-03-06T23:37:34.485 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:34 vm02 bash[17013]: audit 2026-03-06T22:37:33.910836+0000 mon.vm02 (mon.0) 35 : audit [INF] from='client.? 192.168.123.102:0/4072998289' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
2026-03-06T23:37:34.945 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout
    {
        "epoch": 5,
        "available": true,
        "active_name": "vm02.opvwec",
        "num_standby": 0
    }
2026-03-06T23:37:34.945 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for the mgr to restart...
2026-03-06T23:37:34.945 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for mgr epoch 5...
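[editor's note] The audit entries above are the core of the bootstrap's mgr setup: it assimilates a minimal config into the mon config store, enables the cephadm mgr module, and then waits for the active mgr to restart with the module loaded. A minimal sketch of the equivalent manual sequence, assuming an admin keyring on the host (the polling loop is illustrative, not the exact check the bootstrap performs):

  # Feed the minimal bootstrap config into the mon's central config store.
  ceph config assimilate-conf -i /etc/ceph/ceph.conf
  # Enable the cephadm orchestrator module; this restarts the active mgr.
  ceph mgr module enable cephadm
  # Poll until a mgr is active again before issuing orchestrator commands.
  until ceph mgr stat | grep -q '"available": true'; do sleep 2; done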
2026-03-06T23:37:35.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:35 vm02 bash[17013]: audit 2026-03-06T22:37:34.431286+0000 mon.vm02 (mon.0) 36 : audit [INF] from='client.? 192.168.123.102:0/4072998289' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
2026-03-06T23:37:35.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:35 vm02 bash[17013]: cluster 2026-03-06T22:37:34.436601+0000 mon.vm02 (mon.0) 37 : cluster [DBG] mgrmap e5: vm02.opvwec(active, since 3s)
2026-03-06T23:37:35.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:35 vm02 bash[17013]: audit 2026-03-06T22:37:34.864939+0000 mon.vm02 (mon.0) 38 : audit [DBG] from='client.? 192.168.123.102:0/4104584947' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-06T23:37:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:44 vm02 bash[17013]: cluster 2026-03-06T22:37:44.254082+0000 mon.vm02 (mon.0) 39 : cluster [INF] Active manager daemon vm02.opvwec restarted
2026-03-06T23:37:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:44 vm02 bash[17013]: cluster 2026-03-06T22:37:44.254481+0000 mon.vm02 (mon.0) 40 : cluster [INF] Activating manager daemon vm02.opvwec
2026-03-06T23:37:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:44 vm02 bash[17013]: cluster 2026-03-06T22:37:44.262067+0000 mon.vm02 (mon.0) 41 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in
2026-03-06T23:37:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:44 vm02 bash[17013]: cluster 2026-03-06T22:37:44.262244+0000 mon.vm02 (mon.0) 42 : cluster [DBG] mgrmap e6: vm02.opvwec(active, starting, since 0.00785025s)
2026-03-06T23:37:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:44 vm02 bash[17013]: audit 2026-03-06T22:37:44.265086+0000 mon.vm02 (mon.0) 43 : audit [DBG] from='mgr.14124 192.168.123.102:0/2585236670' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mon metadata", "id": "vm02"}]: dispatch
2026-03-06T23:37:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:44 vm02 bash[17013]: audit 2026-03-06T22:37:44.265187+0000 mon.vm02 (mon.0) 44 : audit [DBG] from='mgr.14124 192.168.123.102:0/2585236670' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mgr metadata", "who": "vm02.opvwec", "id": "vm02.opvwec"}]: dispatch
2026-03-06T23:37:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:44 vm02 bash[17013]: audit 2026-03-06T22:37:44.265797+0000 mon.vm02 (mon.0) 45 : audit [DBG] from='mgr.14124 192.168.123.102:0/2585236670' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-06T23:37:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:44 vm02 bash[17013]: audit 2026-03-06T22:37:44.265919+0000 mon.vm02 (mon.0) 46 : audit [DBG] from='mgr.14124 192.168.123.102:0/2585236670' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-06T23:37:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:44 vm02 bash[17013]: audit 2026-03-06T22:37:44.265990+0000 mon.vm02 (mon.0) 47 : audit [DBG] from='mgr.14124 192.168.123.102:0/2585236670' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-06T23:37:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:44 vm02 bash[17013]: cluster 2026-03-06T22:37:44.271051+0000 mon.vm02 (mon.0) 48 : cluster [INF] Manager daemon vm02.opvwec is now available
2026-03-06T23:37:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:44 vm02 bash[17013]: audit 2026-03-06T22:37:44.285397+0000 mon.vm02 (mon.0) 49 : audit [INF] from='mgr.14124 192.168.123.102:0/2585236670' entity='mgr.vm02.opvwec'
2026-03-06T23:37:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:44 vm02 bash[17013]: audit 2026-03-06T22:37:44.289460+0000 mon.vm02 (mon.0) 50 : audit [INF] from='mgr.14124 192.168.123.102:0/2585236670' entity='mgr.vm02.opvwec'
2026-03-06T23:37:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:44 vm02 bash[17013]: audit 2026-03-06T22:37:44.301397+0000 mon.vm02 (mon.0) 51 : audit [DBG] from='mgr.14124 192.168.123.102:0/2585236670' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-06T23:37:44.744 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:44 vm02 bash[17013]: audit 2026-03-06T22:37:44.301613+0000 mon.vm02 (mon.0) 52 : audit [INF] from='mgr.14124 192.168.123.102:0/2585236670' entity='mgr.vm02.opvwec' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm02.opvwec/mirror_snapshot_schedule"}]: dispatch
2026-03-06T23:37:44.744 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:44 vm02 bash[17013]: audit 2026-03-06T22:37:44.303380+0000 mon.vm02 (mon.0) 53 : audit [DBG] from='mgr.14124 192.168.123.102:0/2585236670' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-06T23:37:45.332 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout
    {
        "mgrmap_epoch": 7,
        "initialized": true
    }
2026-03-06T23:37:45.333 INFO:teuthology.orchestra.run.vm02.stdout:mgr epoch 5 is available
2026-03-06T23:37:45.333 INFO:teuthology.orchestra.run.vm02.stdout:Setting orchestrator backend to cephadm...
2026-03-06T23:37:45.710 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:45 vm02 bash[17013]: cephadm 2026-03-06T22:37:44.282662+0000 mgr.vm02.opvwec (mgr.14124) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration.
2026-03-06T23:37:45.710 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:45 vm02 bash[17013]: audit 2026-03-06T22:37:44.322306+0000 mon.vm02 (mon.0) 54 : audit [INF] from='mgr.14124 192.168.123.102:0/2585236670' entity='mgr.vm02.opvwec' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm02.opvwec/trash_purge_schedule"}]: dispatch
2026-03-06T23:37:45.710 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:45 vm02 bash[17013]: audit 2026-03-06T22:37:44.625420+0000 mon.vm02 (mon.0) 55 : audit [INF] from='mgr.14124 192.168.123.102:0/2585236670' entity='mgr.vm02.opvwec'
2026-03-06T23:37:45.710 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:45 vm02 bash[17013]: audit 2026-03-06T22:37:44.628737+0000 mon.vm02 (mon.0) 56 : audit [INF] from='mgr.14124 192.168.123.102:0/2585236670' entity='mgr.vm02.opvwec'
2026-03-06T23:37:45.710 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:45 vm02 bash[17013]: cluster 2026-03-06T22:37:45.265956+0000 mon.vm02 (mon.0) 57 : cluster [DBG] mgrmap e7: vm02.opvwec(active, since 1.01156s)
2026-03-06T23:37:46.116 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout value unchanged
2026-03-06T23:37:46.116 INFO:teuthology.orchestra.run.vm02.stdout:Generating ssh key...
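[editor's note] "value unchanged" is the response to the `orch set backend` call visible in the audit trail below; the subsequent audit entries show cephadm configuring its SSH identity. A sketch of the same steps, assuming the defaults this run uses (SSH user root; the output path is hypothetical):

  # Select the cephadm module as the orchestrator backend.
  ceph orch set backend cephadm
  # cephadm reaches managed hosts over SSH as this user.
  ceph cephadm set-user root
  # Generate the cluster SSH keypair and export the public half.
  ceph cephadm generate-key
  ceph cephadm get-pub-key > ceph.pub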
2026-03-06T23:37:46.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:46 vm02 bash[17013]: audit 2026-03-06T22:37:45.267006+0000 mgr.vm02.opvwec (mgr.14124) 2 : audit [DBG] from='client.14128 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-06T23:37:46.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:46 vm02 bash[17013]: audit 2026-03-06T22:37:45.273189+0000 mgr.vm02.opvwec (mgr.14124) 3 : audit [DBG] from='client.14128 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-06T23:37:46.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:46 vm02 bash[17013]: cephadm 2026-03-06T22:37:45.415908+0000 mgr.vm02.opvwec (mgr.14124) 4 : cephadm [INF] [06/Mar/2026:22:37:45] ENGINE Bus STARTING
2026-03-06T23:37:46.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:46 vm02 bash[17013]: audit 2026-03-06T22:37:45.630566+0000 mon.vm02 (mon.0) 58 : audit [DBG] from='mgr.14124 192.168.123.102:0/2585236670' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-06T23:37:46.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:46 vm02 bash[17013]: audit 2026-03-06T22:37:45.688429+0000 mon.vm02 (mon.0) 59 : audit [INF] from='mgr.14124 192.168.123.102:0/2585236670' entity='mgr.vm02.opvwec'
2026-03-06T23:37:46.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:46 vm02 bash[17013]: audit 2026-03-06T22:37:45.694492+0000 mon.vm02 (mon.0) 60 : audit [DBG] from='mgr.14124 192.168.123.102:0/2585236670' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-06T23:37:47.074 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKaHx0dhdX600/GTtSlC2BTsHil8Xtljqc0QMFr2vvl+nNwDiq/pkhvAudzaFoom8D4EwW9QzUBuZlny8sZe+XZoHha0SDPxU5WJ2gfXveZvIyw3ep/y1tI4ycFyZKdjLa9LTnOW1imgRbZJSmaDLAnRjRtj8FwOaXJBYBH+IJxO4ZwRUdlWRnqObn+I1dKRYlXJXTa97GhJ4CPaJtUYo8cVqBkz7L2HpEXmt5bEcHLASVUbhH6oaoCHa98fFbCzOSxRhekGzrNoLm8fmGJhsdgZktYOrQ6mg0cFGD8ICsiiGtnaouay9cLRq87j/X07kPYTnfIvHWo3IDKq0iRynTo1VVbk5flMzA/wV9uN51lmLq7zL7foq8RrBj/JLlgOEfTMpiXQHF2FpXHDJUSG+EBEVd+u2zjxYBt7uwU1y2Hp2AU7kGK/t64YSj8W931d64K6hXBp4Kf4DP1wwHu2oRLfeOVBuJpbWrD9MpqLTmqbw6iAGx8hWVsFGDj9B46ic= ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44
2026-03-06T23:37:47.074 INFO:teuthology.orchestra.run.vm02.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub
2026-03-06T23:37:47.074 INFO:teuthology.orchestra.run.vm02.stdout:Adding key to root@localhost authorized_keys...
2026-03-06T23:37:47.074 INFO:teuthology.orchestra.run.vm02.stdout:Adding host vm02...
2026-03-06T23:37:47.579 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:47 vm02 bash[17013]: cephadm 2026-03-06T22:37:45.518247+0000 mgr.vm02.opvwec (mgr.14124) 5 : cephadm [INF] [06/Mar/2026:22:37:45] ENGINE Serving on http://192.168.123.102:8765
2026-03-06T23:37:47.579 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:47 vm02 bash[17013]: cephadm 2026-03-06T22:37:45.630033+0000 mgr.vm02.opvwec (mgr.14124) 6 : cephadm [INF] [06/Mar/2026:22:37:45] ENGINE Serving on https://192.168.123.102:7150
2026-03-06T23:37:47.579 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:47 vm02 bash[17013]: cephadm 2026-03-06T22:37:45.630075+0000 mgr.vm02.opvwec (mgr.14124) 7 : cephadm [INF] [06/Mar/2026:22:37:45] ENGINE Bus STARTED
2026-03-06T23:37:47.579 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:47 vm02 bash[17013]: cephadm 2026-03-06T22:37:45.630571+0000 mgr.vm02.opvwec (mgr.14124) 8 : cephadm [INF] [06/Mar/2026:22:37:45] ENGINE Client ('192.168.123.102', 34132) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-06T23:37:47.579 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:47 vm02 bash[17013]: audit 2026-03-06T22:37:45.684277+0000 mgr.vm02.opvwec (mgr.14124) 9 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:37:47.579 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:47 vm02 bash[17013]: audit 2026-03-06T22:37:46.069359+0000 mgr.vm02.opvwec (mgr.14124) 10 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:37:47.579 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:47 vm02 bash[17013]: audit 2026-03-06T22:37:46.427292+0000 mgr.vm02.opvwec (mgr.14124) 11 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:37:47.580 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:47 vm02 bash[17013]: cephadm 2026-03-06T22:37:46.427507+0000 mgr.vm02.opvwec (mgr.14124) 12 : cephadm [INF] Generating ssh key...
2026-03-06T23:37:47.580 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:47 vm02 bash[17013]: audit 2026-03-06T22:37:46.636778+0000 mon.vm02 (mon.0) 61 : audit [INF] from='mgr.14124 192.168.123.102:0/2585236670' entity='mgr.vm02.opvwec'
2026-03-06T23:37:47.580 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:47 vm02 bash[17013]: audit 2026-03-06T22:37:46.639185+0000 mon.vm02 (mon.0) 62 : audit [INF] from='mgr.14124 192.168.123.102:0/2585236670' entity='mgr.vm02.opvwec'
2026-03-06T23:37:47.580 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:47 vm02 bash[17013]: cluster 2026-03-06T22:37:46.691801+0000 mon.vm02 (mon.0) 63 : cluster [DBG] mgrmap e8: vm02.opvwec(active, since 2s)
2026-03-06T23:37:48.321 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:48 vm02 bash[17013]: audit 2026-03-06T22:37:47.027843+0000 mgr.vm02.opvwec (mgr.14124) 13 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:37:48.588 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:48 vm02 bash[17013]: audit 2026-03-06T22:37:47.398907+0000 mgr.vm02.opvwec (mgr.14124) 14 : audit [DBG] from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm02", "addr": "192.168.123.102", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:37:49.573 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:49 vm02 bash[17013]: cephadm 2026-03-06T22:37:48.305968+0000 mgr.vm02.opvwec (mgr.14124) 15 : cephadm [INF] Deploying cephadm binary to vm02
2026-03-06T23:37:50.159 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout Added host 'vm02' with addr '192.168.123.102'
2026-03-06T23:37:50.159 INFO:teuthology.orchestra.run.vm02.stdout:Deploying mon service with default placement...
2026-03-06T23:37:50.540 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout Scheduled mon update...
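[editor's note] Host enrollment, as logged above, is two steps: authorize the cluster's public key for the SSH user on the target, then register the host so cephadm can push its binary and schedule daemons there. A sketch using the host and address from this run; ceph.pub is the key exported earlier, and ssh-copy-id is one way to install it, not necessarily what teuthology does:

  # Install the cluster key for root on the target host.
  ssh-copy-id -f -i ceph.pub root@192.168.123.102
  # Register the host; cephadm then deploys its binary there, per the
  # "Deploying cephadm binary to vm02" entry above.
  ceph orch host add vm02 192.168.123.102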
2026-03-06T23:37:50.540 INFO:teuthology.orchestra.run.vm02.stdout:Deploying mgr service with default placement...
2026-03-06T23:37:50.916 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout Scheduled mgr update...
2026-03-06T23:37:50.916 INFO:teuthology.orchestra.run.vm02.stdout:Deploying crash service with default placement...
2026-03-06T23:37:51.198 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:51 vm02 bash[17013]: audit 2026-03-06T22:37:50.095038+0000 mon.vm02 (mon.0) 64 : audit [INF] from='mgr.14124 192.168.123.102:0/2585236670' entity='mgr.vm02.opvwec'
2026-03-06T23:37:51.198 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:51 vm02 bash[17013]: cephadm 2026-03-06T22:37:50.095872+0000 mgr.vm02.opvwec (mgr.14124) 16 : cephadm [INF] Added host vm02
2026-03-06T23:37:51.198 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:51 vm02 bash[17013]: audit 2026-03-06T22:37:50.096189+0000 mon.vm02 (mon.0) 65 : audit [DBG] from='mgr.14124 192.168.123.102:0/2585236670' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-06T23:37:51.198 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:51 vm02 bash[17013]: audit 2026-03-06T22:37:50.492055+0000 mon.vm02 (mon.0) 66 : audit [INF] from='mgr.14124 192.168.123.102:0/2585236670' entity='mgr.vm02.opvwec'
2026-03-06T23:37:51.198 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:51 vm02 bash[17013]: audit 2026-03-06T22:37:50.868640+0000 mon.vm02 (mon.0) 67 : audit [INF] from='mgr.14124 192.168.123.102:0/2585236670' entity='mgr.vm02.opvwec'
2026-03-06T23:37:51.374 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout Scheduled crash update...
2026-03-06T23:37:51.374 INFO:teuthology.orchestra.run.vm02.stdout:Deploying ceph-exporter service with default placement...
2026-03-06T23:37:51.865 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout Scheduled ceph-exporter update...
2026-03-06T23:37:51.865 INFO:teuthology.orchestra.run.vm02.stdout:Deploying prometheus service with default placement...
2026-03-06T23:37:52.257 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout Scheduled prometheus update...
2026-03-06T23:37:52.257 INFO:teuthology.orchestra.run.vm02.stdout:Deploying grafana service with default placement...
2026-03-06T23:37:52.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:52 vm02 bash[17013]: audit 2026-03-06T22:37:50.486471+0000 mgr.vm02.opvwec (mgr.14124) 17 : audit [DBG] from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:37:52.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:52 vm02 bash[17013]: cephadm 2026-03-06T22:37:50.487285+0000 mgr.vm02.opvwec (mgr.14124) 18 : cephadm [INF] Saving service mon spec with placement count:5
2026-03-06T23:37:52.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:52 vm02 bash[17013]: audit 2026-03-06T22:37:50.864739+0000 mgr.vm02.opvwec (mgr.14124) 19 : audit [DBG] from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:37:52.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:52 vm02 bash[17013]: cephadm 2026-03-06T22:37:50.865433+0000 mgr.vm02.opvwec (mgr.14124) 20 : cephadm [INF] Saving service mgr spec with placement count:2
2026-03-06T23:37:52.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:52 vm02 bash[17013]: audit 2026-03-06T22:37:51.264690+0000 mon.vm02 (mon.0) 68 : audit [INF] from='mgr.14124 192.168.123.102:0/2585236670' entity='mgr.vm02.opvwec'
2026-03-06T23:37:52.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:52 vm02 bash[17013]: audit 2026-03-06T22:37:51.309812+0000 mgr.vm02.opvwec (mgr.14124) 21 : audit [DBG] from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:37:52.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:52 vm02 bash[17013]: cephadm 2026-03-06T22:37:51.310604+0000 mgr.vm02.opvwec (mgr.14124) 22 : cephadm [INF] Saving service crash spec with placement *
2026-03-06T23:37:52.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:52 vm02 bash[17013]: audit 2026-03-06T22:37:51.314058+0000 mon.vm02 (mon.0) 69 : audit [INF] from='mgr.14124 192.168.123.102:0/2585236670' entity='mgr.vm02.opvwec'
2026-03-06T23:37:52.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:52 vm02 bash[17013]: audit 2026-03-06T22:37:51.624349+0000 mon.vm02 (mon.0) 70 : audit [INF] from='mgr.14124 192.168.123.102:0/2585236670' entity='mgr.vm02.opvwec'
2026-03-06T23:37:52.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:52 vm02 bash[17013]: audit 2026-03-06T22:37:51.790843+0000 mon.vm02 (mon.0) 71 : audit [INF] from='mgr.14124 192.168.123.102:0/2585236670' entity='mgr.vm02.opvwec'
2026-03-06T23:37:52.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:52 vm02 bash[17013]: audit 2026-03-06T22:37:52.209146+0000 mon.vm02 (mon.0) 72 : audit [INF] from='mgr.14124 192.168.123.102:0/2585236670' entity='mgr.vm02.opvwec'
2026-03-06T23:37:52.635 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout Scheduled grafana update...
2026-03-06T23:37:52.636 INFO:teuthology.orchestra.run.vm02.stdout:Deploying node-exporter service with default placement...
2026-03-06T23:37:53.008 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout Scheduled node-exporter update...
2026-03-06T23:37:53.008 INFO:teuthology.orchestra.run.vm02.stdout:Deploying alertmanager service with default placement...
2026-03-06T23:37:53.379 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout Scheduled alertmanager update...
2026-03-06T23:37:53.720 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:53 vm02 bash[17013]: audit 2026-03-06T22:37:51.781543+0000 mgr.vm02.opvwec (mgr.14124) 23 : audit [DBG] from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "ceph-exporter", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:37:53.720 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:53 vm02 bash[17013]: cephadm 2026-03-06T22:37:51.782343+0000 mgr.vm02.opvwec (mgr.14124) 24 : cephadm [INF] Saving service ceph-exporter spec with placement *
2026-03-06T23:37:53.721 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:53 vm02 bash[17013]: audit 2026-03-06T22:37:52.205202+0000 mgr.vm02.opvwec (mgr.14124) 25 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:37:53.721 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:53 vm02 bash[17013]: cephadm 2026-03-06T22:37:52.205843+0000 mgr.vm02.opvwec (mgr.14124) 26 : cephadm [INF] Saving service prometheus spec with placement count:1
2026-03-06T23:37:53.721 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:53 vm02 bash[17013]: audit 2026-03-06T22:37:52.585158+0000 mon.vm02 (mon.0) 73 : audit [INF] from='mgr.14124 192.168.123.102:0/2585236670' entity='mgr.vm02.opvwec'
2026-03-06T23:37:53.721 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:53 vm02 bash[17013]: audit 2026-03-06T22:37:52.959686+0000 mon.vm02 (mon.0) 74 : audit [INF] from='mgr.14124 192.168.123.102:0/2585236670' entity='mgr.vm02.opvwec'
2026-03-06T23:37:54.138 INFO:teuthology.orchestra.run.vm02.stdout:Enabling the dashboard module...
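[editor's note] Each "Deploying ... service with default placement" / "Saving service ... spec" pair above corresponds to one `ceph orch apply` call. A sketch of the placements recorded in the journal entries (counts and wildcards taken from this run's log):

  ceph orch apply mon --placement=5             # "placement count:5"
  ceph orch apply mgr --placement=2             # "placement count:2"
  ceph orch apply crash --placement='*'         # one per host
  ceph orch apply ceph-exporter --placement='*'
  ceph orch apply node-exporter --placement='*'
  ceph orch apply prometheus --placement=1
  ceph orch apply grafana --placement=1
  ceph orch apply alertmanager --placement=1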
2026-03-06T23:37:54.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:54 vm02 bash[17013]: audit 2026-03-06T22:37:52.580859+0000 mgr.vm02.opvwec (mgr.14124) 27 : audit [DBG] from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:37:54.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:54 vm02 bash[17013]: cephadm 2026-03-06T22:37:52.581765+0000 mgr.vm02.opvwec (mgr.14124) 28 : cephadm [INF] Saving service grafana spec with placement count:1
2026-03-06T23:37:54.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:54 vm02 bash[17013]: audit 2026-03-06T22:37:52.955437+0000 mgr.vm02.opvwec (mgr.14124) 29 : audit [DBG] from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:37:54.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:54 vm02 bash[17013]: cephadm 2026-03-06T22:37:52.956105+0000 mgr.vm02.opvwec (mgr.14124) 30 : cephadm [INF] Saving service node-exporter spec with placement *
2026-03-06T23:37:54.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:54 vm02 bash[17013]: audit 2026-03-06T22:37:53.327545+0000 mgr.vm02.opvwec (mgr.14124) 31 : audit [DBG] from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:37:54.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:54 vm02 bash[17013]: cephadm 2026-03-06T22:37:53.328204+0000 mgr.vm02.opvwec (mgr.14124) 32 : cephadm [INF] Saving service alertmanager spec with placement count:1
2026-03-06T23:37:54.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:54 vm02 bash[17013]: audit 2026-03-06T22:37:53.331002+0000 mon.vm02 (mon.0) 75 : audit [INF] from='mgr.14124 192.168.123.102:0/2585236670' entity='mgr.vm02.opvwec'
2026-03-06T23:37:54.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:54 vm02 bash[17013]: audit 2026-03-06T22:37:53.700711+0000 mon.vm02 (mon.0) 76 : audit [INF] from='client.? 192.168.123.102:0/3616673512' entity='client.admin'
2026-03-06T23:37:54.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:54 vm02 bash[17013]: audit 2026-03-06T22:37:54.082824+0000 mon.vm02 (mon.0) 77 : audit [INF] from='client.? 192.168.123.102:0/3254762411' entity='client.admin'
2026-03-06T23:37:55.663 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:55 vm02 bash[17013]: audit 2026-03-06T22:37:54.477642+0000 mon.vm02 (mon.0) 78 : audit [INF] from='client.? 192.168.123.102:0/4024407399' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
2026-03-06T23:37:55.836 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout
    {
        "epoch": 9,
        "available": true,
        "active_name": "vm02.opvwec",
        "num_standby": 0
    }
2026-03-06T23:37:55.836 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for the mgr to restart...
2026-03-06T23:37:55.836 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for mgr epoch 9...
2026-03-06T23:37:56.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:56 vm02 bash[17013]: audit 2026-03-06T22:37:55.337975+0000 mon.vm02 (mon.0) 79 : audit [INF] from='client.? 192.168.123.102:0/4024407399' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
2026-03-06T23:37:56.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:56 vm02 bash[17013]: cluster 2026-03-06T22:37:55.340333+0000 mon.vm02 (mon.0) 80 : cluster [DBG] mgrmap e9: vm02.opvwec(active, since 11s)
2026-03-06T23:37:56.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:37:56 vm02 bash[17013]: audit 2026-03-06T22:37:55.779755+0000 mon.vm02 (mon.0) 81 : audit [DBG] from='client.? 192.168.123.102:0/532473986' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-06T23:38:05.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:05 vm02 bash[17013]: cluster 2026-03-06T22:38:05.493354+0000 mon.vm02 (mon.0) 82 : cluster [INF] Active manager daemon vm02.opvwec restarted
2026-03-06T23:38:05.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:05 vm02 bash[17013]: cluster 2026-03-06T22:38:05.493571+0000 mon.vm02 (mon.0) 83 : cluster [INF] Activating manager daemon vm02.opvwec
2026-03-06T23:38:05.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:05 vm02 bash[17013]: cluster 2026-03-06T22:38:05.498226+0000 mon.vm02 (mon.0) 84 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in
2026-03-06T23:38:05.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:05 vm02 bash[17013]: cluster 2026-03-06T22:38:05.498314+0000 mon.vm02 (mon.0) 85 : cluster [DBG] mgrmap e10: vm02.opvwec(active, starting, since 0.00485371s)
2026-03-06T23:38:05.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:05 vm02 bash[17013]: audit 2026-03-06T22:38:05.501050+0000 mon.vm02 (mon.0) 86 : audit [DBG] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mon metadata", "id": "vm02"}]: dispatch
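[editor's note] Enabling a mgr module (dashboard here) restarts the active mgr and bumps the mgrmap epoch; the "Waiting for mgr epoch 9..." message above is that wait. A sketch of one way to perform the same check, assuming jq is available; this is illustrative, not necessarily how the bootstrap implements it:

  # Record the epoch, enable the module, then wait for a higher epoch:
  # the restarted mgr shows up as a new mgrmap epoch.
  before=$(ceph mgr stat --format json | jq -r .epoch)
  ceph mgr module enable dashboard
  until [ "$(ceph mgr stat --format json | jq -r .epoch)" -gt "$before" ]; do
      sleep 2
  done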
vm02 bash[17013]: audit 2026-03-06T22:38:05.501134+0000 mon.vm02 (mon.0) 87 : audit [DBG] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mgr metadata", "who": "vm02.opvwec", "id": "vm02.opvwec"}]: dispatch 2026-03-06T23:38:05.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:05 vm02 bash[17013]: audit 2026-03-06T22:38:05.501134+0000 mon.vm02 (mon.0) 87 : audit [DBG] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mgr metadata", "who": "vm02.opvwec", "id": "vm02.opvwec"}]: dispatch 2026-03-06T23:38:05.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:05 vm02 bash[17013]: audit 2026-03-06T22:38:05.502147+0000 mon.vm02 (mon.0) 88 : audit [DBG] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-06T23:38:05.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:05 vm02 bash[17013]: audit 2026-03-06T22:38:05.502147+0000 mon.vm02 (mon.0) 88 : audit [DBG] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-06T23:38:05.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:05 vm02 bash[17013]: audit 2026-03-06T22:38:05.502226+0000 mon.vm02 (mon.0) 89 : audit [DBG] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-06T23:38:05.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:05 vm02 bash[17013]: audit 2026-03-06T22:38:05.502226+0000 mon.vm02 (mon.0) 89 : audit [DBG] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-06T23:38:05.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:05 vm02 bash[17013]: audit 2026-03-06T22:38:05.502300+0000 mon.vm02 (mon.0) 90 : audit [DBG] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-06T23:38:05.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:05 vm02 bash[17013]: audit 2026-03-06T22:38:05.502300+0000 mon.vm02 (mon.0) 90 : audit [DBG] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-06T23:38:05.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:05 vm02 bash[17013]: cluster 2026-03-06T22:38:05.508211+0000 mon.vm02 (mon.0) 91 : cluster [INF] Manager daemon vm02.opvwec is now available 2026-03-06T23:38:05.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:05 vm02 bash[17013]: cluster 2026-03-06T22:38:05.508211+0000 mon.vm02 (mon.0) 91 : cluster [INF] Manager daemon vm02.opvwec is now available 2026-03-06T23:38:05.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:05 vm02 bash[17013]: audit 2026-03-06T22:38:05.535150+0000 mon.vm02 (mon.0) 92 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm02.opvwec/mirror_snapshot_schedule"}]: dispatch 2026-03-06T23:38:05.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:05 vm02 bash[17013]: audit 2026-03-06T22:38:05.535150+0000 mon.vm02 (mon.0) 92 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm02.opvwec/mirror_snapshot_schedule"}]: dispatch 2026-03-06T23:38:05.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:05 vm02 bash[17013]: audit 
2026-03-06T22:38:05.535768+0000 mon.vm02 (mon.0) 93 : audit [DBG] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-06T23:38:05.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:05 vm02 bash[17013]: audit 2026-03-06T22:38:05.535768+0000 mon.vm02 (mon.0) 93 : audit [DBG] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-06T23:38:06.560 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout { 2026-03-06T23:38:06.560 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 11, 2026-03-06T23:38:06.560 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-06T23:38:06.560 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout } 2026-03-06T23:38:06.560 INFO:teuthology.orchestra.run.vm02.stdout:mgr epoch 9 is available 2026-03-06T23:38:06.560 INFO:teuthology.orchestra.run.vm02.stdout:Generating a dashboard self-signed certificate... 2026-03-06T23:38:06.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:06 vm02 bash[17013]: audit 2026-03-06T22:38:05.554533+0000 mon.vm02 (mon.0) 94 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm02.opvwec/trash_purge_schedule"}]: dispatch 2026-03-06T23:38:06.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:06 vm02 bash[17013]: audit 2026-03-06T22:38:05.554533+0000 mon.vm02 (mon.0) 94 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm02.opvwec/trash_purge_schedule"}]: dispatch 2026-03-06T23:38:06.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:06 vm02 bash[17013]: cephadm 2026-03-06T22:38:06.365277+0000 mgr.vm02.opvwec (mgr.14168) 1 : cephadm [INF] [06/Mar/2026:22:38:06] ENGINE Bus STARTING 2026-03-06T23:38:06.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:06 vm02 bash[17013]: cephadm 2026-03-06T22:38:06.365277+0000 mgr.vm02.opvwec (mgr.14168) 1 : cephadm [INF] [06/Mar/2026:22:38:06] ENGINE Bus STARTING 2026-03-06T23:38:06.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:06 vm02 bash[17013]: cluster 2026-03-06T22:38:06.502571+0000 mon.vm02 (mon.0) 95 : cluster [DBG] mgrmap e11: vm02.opvwec(active, since 1.00911s) 2026-03-06T23:38:06.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:06 vm02 bash[17013]: cluster 2026-03-06T22:38:06.502571+0000 mon.vm02 (mon.0) 95 : cluster [DBG] mgrmap e11: vm02.opvwec(active, since 1.00911s) 2026-03-06T23:38:07.043 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout Self-signed certificate created 2026-03-06T23:38:07.044 INFO:teuthology.orchestra.run.vm02.stdout:Creating initial admin user... 2026-03-06T23:38:07.567 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$g.eMgjHpbymfA7Q48fAOQOVtOmLdvY4WPvivCt./hdVvIJ/BuMp8.", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1772836687, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true} 2026-03-06T23:38:07.567 INFO:teuthology.orchestra.run.vm02.stdout:Fetching dashboard port number... 
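Aside (illustrative, not part of the captured run): the bootstrap output above enables the dashboard module, generates a self-signed certificate, and creates the initial admin account; the mon audit trail records the matching "dashboard ac-user-create" call. A minimal Python sketch of issuing the same call by hand follows; the helper name and the temp-file password handling are assumptions of the sketch, only the CLI verb and flags come from the log.

    # Illustrative sketch only: recreate the dashboard admin user the way
    # cephadm bootstrap does above. Assumes a working `ceph` CLI and admin
    # keyring on the host; the helper name is invented.
    import subprocess
    import tempfile

    def create_dashboard_admin(password: str) -> None:
        # `ceph dashboard ac-user-create` reads the password from a file (-i)
        with tempfile.NamedTemporaryFile("w", suffix=".txt") as pwfile:
            pwfile.write(password)
            pwfile.flush()
            subprocess.run(
                ["ceph", "dashboard", "ac-user-create", "--force-password",
                 "admin", "-i", pwfile.name, "administrator"],
                check=True,
            )

    create_dashboard_admin("u26s8gf3zc")  # password echoed by bootstrap below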
2026-03-06T23:38:07.943 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 8443
2026-03-06T23:38:07.943 INFO:teuthology.orchestra.run.vm02.stdout:firewalld does not appear to be present
2026-03-06T23:38:07.943 INFO:teuthology.orchestra.run.vm02.stdout:Not possible to open ports <[8443]>. firewalld.service is not available
2026-03-06T23:38:07.944 INFO:teuthology.orchestra.run.vm02.stdout:Ceph Dashboard is now available at:
2026-03-06T23:38:07.944 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-06T23:38:07.944 INFO:teuthology.orchestra.run.vm02.stdout: URL: https://vm02.local:8443/
2026-03-06T23:38:07.944 INFO:teuthology.orchestra.run.vm02.stdout: User: admin
2026-03-06T23:38:07.944 INFO:teuthology.orchestra.run.vm02.stdout: Password: u26s8gf3zc
2026-03-06T23:38:07.944 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-06T23:38:07.945 INFO:teuthology.orchestra.run.vm02.stdout:Saving cluster configuration to /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/config directory
2026-03-06T23:38:08.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:07 vm02 bash[17013]: cephadm 2026-03-06T22:38:06.475520+0000 mgr.vm02.opvwec (mgr.14168) 2 : cephadm [INF] [06/Mar/2026:22:38:06] ENGINE Serving on https://192.168.123.102:7150
2026-03-06T23:38:08.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:07 vm02 bash[17013]: cephadm 2026-03-06T22:38:06.475970+0000 mgr.vm02.opvwec (mgr.14168) 3 : cephadm [INF] [06/Mar/2026:22:38:06] ENGINE Client ('192.168.123.102', 59676) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-06T23:38:08.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:07 vm02 bash[17013]: audit 2026-03-06T22:38:06.504880+0000 mgr.vm02.opvwec (mgr.14168) 4 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-06T23:38:08.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:07 vm02 bash[17013]: audit 2026-03-06T22:38:06.508667+0000 mgr.vm02.opvwec (mgr.14168) 5 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-06T23:38:08.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:07 vm02 bash[17013]: cephadm 2026-03-06T22:38:06.577627+0000 mgr.vm02.opvwec (mgr.14168) 6 : cephadm [INF] [06/Mar/2026:22:38:06] ENGINE Serving on http://192.168.123.102:8765
2026-03-06T23:38:08.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:07 vm02 bash[17013]: cephadm 2026-03-06T22:38:06.577686+0000 mgr.vm02.opvwec (mgr.14168) 7 : cephadm [INF] [06/Mar/2026:22:38:06] ENGINE Bus STARTED
2026-03-06T23:38:08.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:07 vm02 bash[17013]: audit 2026-03-06T22:38:06.884487+0000 mgr.vm02.opvwec (mgr.14168) 8 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:38:08.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:07 vm02 bash[17013]: audit 2026-03-06T22:38:06.991918+0000 mon.vm02 (mon.0) 96 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:08.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:07 vm02 bash[17013]: audit 2026-03-06T22:38:06.994873+0000 mon.vm02 (mon.0) 97 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:08.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:07 vm02 bash[17013]: audit 2026-03-06T22:38:07.364751+0000 mgr.vm02.opvwec (mgr.14168) 9 : audit [DBG] from='client.14182 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:38:08.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:07 vm02 bash[17013]: audit 2026-03-06T22:38:07.517672+0000 mon.vm02 (mon.0) 98 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:08.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:07 vm02 bash[17013]: audit 2026-03-06T22:38:07.898047+0000 mon.vm02 (mon.0) 99 : audit [DBG] from='client.? 192.168.123.102:0/2358045081' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch
2026-03-06T23:38:08.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status
2026-03-06T23:38:08.355 INFO:teuthology.orchestra.run.vm02.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config:
2026-03-06T23:38:08.355 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-06T23:38:08.355 INFO:teuthology.orchestra.run.vm02.stdout: sudo /home/ubuntu/cephtest/cephadm shell --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
2026-03-06T23:38:08.355 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-06T23:38:08.355 INFO:teuthology.orchestra.run.vm02.stdout:Or, if you are only running a single cluster on this host:
2026-03-06T23:38:08.355 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-06T23:38:08.355 INFO:teuthology.orchestra.run.vm02.stdout: sudo /home/ubuntu/cephtest/cephadm shell
2026-03-06T23:38:08.355 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-06T23:38:08.355 INFO:teuthology.orchestra.run.vm02.stdout:Please consider enabling telemetry to help improve Ceph:
2026-03-06T23:38:08.355 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-06T23:38:08.355 INFO:teuthology.orchestra.run.vm02.stdout: ceph telemetry on
2026-03-06T23:38:08.355 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-06T23:38:08.355 INFO:teuthology.orchestra.run.vm02.stdout:For more information see:
2026-03-06T23:38:08.355 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-06T23:38:08.355 INFO:teuthology.orchestra.run.vm02.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/
2026-03-06T23:38:08.355 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-06T23:38:08.355 INFO:teuthology.orchestra.run.vm02.stdout:Bootstrap complete.
2026-03-06T23:38:08.380 INFO:tasks.cephadm:Fetching config...
2026-03-06T23:38:08.380 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-06T23:38:08.380 DEBUG:teuthology.orchestra.run.vm02:> dd if=/etc/ceph/ceph.conf of=/dev/stdout
2026-03-06T23:38:08.383 INFO:tasks.cephadm:Fetching client.admin keyring...
2026-03-06T23:38:08.383 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-06T23:38:08.383 DEBUG:teuthology.orchestra.run.vm02:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout
2026-03-06T23:38:08.428 INFO:tasks.cephadm:Fetching mon keyring...
2026-03-06T23:38:08.428 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-06T23:38:08.428 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/keyring of=/dev/stdout
2026-03-06T23:38:08.476 INFO:tasks.cephadm:Fetching pub ssh key...
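Aside (illustrative, not part of the captured run): the three fetches above (conf, admin keyring, mon keyring) all use the same pattern, running `dd if=<path> of=/dev/stdout` on the remote node and capturing its stdout. A minimal sketch of that pattern, with plain ssh standing in for teuthology's orchestra connection layer:

    # Illustrative sketch only: fetch a remote file the way the task does
    # above, by capturing the stdout of `dd if=<path> of=/dev/stdout`.
    # Plain ssh is an assumption of this sketch, not what teuthology uses.
    import subprocess

    def fetch_remote_file(host: str, path: str, sudo: bool = False) -> bytes:
        dd = f"dd if={path} of=/dev/stdout"
        cmd = ["ssh", host, f"sudo {dd}" if sudo else dd]
        return subprocess.run(cmd, check=True, capture_output=True).stdout

    conf = fetch_remote_file("vm02", "/etc/ceph/ceph.conf")
    mon_keyring = fetch_remote_file(
        "vm02",
        "/var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/keyring",
        sudo=True,
    )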
2026-03-06T23:38:08.476 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-06T23:38:08.476 DEBUG:teuthology.orchestra.run.vm02:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout
2026-03-06T23:38:08.524 INFO:tasks.cephadm:Installing pub ssh key for root users...
2026-03-06T23:38:08.524 DEBUG:teuthology.orchestra.run.vm02:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKaHx0dhdX600/GTtSlC2BTsHil8Xtljqc0QMFr2vvl+nNwDiq/pkhvAudzaFoom8D4EwW9QzUBuZlny8sZe+XZoHha0SDPxU5WJ2gfXveZvIyw3ep/y1tI4ycFyZKdjLa9LTnOW1imgRbZJSmaDLAnRjRtj8FwOaXJBYBH+IJxO4ZwRUdlWRnqObn+I1dKRYlXJXTa97GhJ4CPaJtUYo8cVqBkz7L2HpEXmt5bEcHLASVUbhH6oaoCHa98fFbCzOSxRhekGzrNoLm8fmGJhsdgZktYOrQ6mg0cFGD8ICsiiGtnaouay9cLRq87j/X07kPYTnfIvHWo3IDKq0iRynTo1VVbk5flMzA/wV9uN51lmLq7zL7foq8RrBj/JLlgOEfTMpiXQHF2FpXHDJUSG+EBEVd+u2zjxYBt7uwU1y2Hp2AU7kGK/t64YSj8W931d64K6hXBp4Kf4DP1wwHu2oRLfeOVBuJpbWrD9MpqLTmqbw6iAGx8hWVsFGDj9B46ic= ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys
2026-03-06T23:38:08.575 INFO:teuthology.orchestra.run.vm02.stdout:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKaHx0dhdX600/GTtSlC2BTsHil8Xtljqc0QMFr2vvl+nNwDiq/pkhvAudzaFoom8D4EwW9QzUBuZlny8sZe+XZoHha0SDPxU5WJ2gfXveZvIyw3ep/y1tI4ycFyZKdjLa9LTnOW1imgRbZJSmaDLAnRjRtj8FwOaXJBYBH+IJxO4ZwRUdlWRnqObn+I1dKRYlXJXTa97GhJ4CPaJtUYo8cVqBkz7L2HpEXmt5bEcHLASVUbhH6oaoCHa98fFbCzOSxRhekGzrNoLm8fmGJhsdgZktYOrQ6mg0cFGD8ICsiiGtnaouay9cLRq87j/X07kPYTnfIvHWo3IDKq0iRynTo1VVbk5flMzA/wV9uN51lmLq7zL7foq8RrBj/JLlgOEfTMpiXQHF2FpXHDJUSG+EBEVd+u2zjxYBt7uwU1y2Hp2AU7kGK/t64YSj8W931d64K6hXBp4Kf4DP1wwHu2oRLfeOVBuJpbWrD9MpqLTmqbw6iAGx8hWVsFGDj9B46ic= ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44
2026-03-06T23:38:08.580 DEBUG:teuthology.orchestra.run.vm07:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKaHx0dhdX600/GTtSlC2BTsHil8Xtljqc0QMFr2vvl+nNwDiq/pkhvAudzaFoom8D4EwW9QzUBuZlny8sZe+XZoHha0SDPxU5WJ2gfXveZvIyw3ep/y1tI4ycFyZKdjLa9LTnOW1imgRbZJSmaDLAnRjRtj8FwOaXJBYBH+IJxO4ZwRUdlWRnqObn+I1dKRYlXJXTa97GhJ4CPaJtUYo8cVqBkz7L2HpEXmt5bEcHLASVUbhH6oaoCHa98fFbCzOSxRhekGzrNoLm8fmGJhsdgZktYOrQ6mg0cFGD8ICsiiGtnaouay9cLRq87j/X07kPYTnfIvHWo3IDKq0iRynTo1VVbk5flMzA/wV9uN51lmLq7zL7foq8RrBj/JLlgOEfTMpiXQHF2FpXHDJUSG+EBEVd+u2zjxYBt7uwU1y2Hp2AU7kGK/t64YSj8W931d64K6hXBp4Kf4DP1wwHu2oRLfeOVBuJpbWrD9MpqLTmqbw6iAGx8hWVsFGDj9B46ic= ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys
2026-03-06T23:38:08.592 INFO:teuthology.orchestra.run.vm07.stdout:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKaHx0dhdX600/GTtSlC2BTsHil8Xtljqc0QMFr2vvl+nNwDiq/pkhvAudzaFoom8D4EwW9QzUBuZlny8sZe+XZoHha0SDPxU5WJ2gfXveZvIyw3ep/y1tI4ycFyZKdjLa9LTnOW1imgRbZJSmaDLAnRjRtj8FwOaXJBYBH+IJxO4ZwRUdlWRnqObn+I1dKRYlXJXTa97GhJ4CPaJtUYo8cVqBkz7L2HpEXmt5bEcHLASVUbhH6oaoCHa98fFbCzOSxRhekGzrNoLm8fmGJhsdgZktYOrQ6mg0cFGD8ICsiiGtnaouay9cLRq87j/X07kPYTnfIvHWo3IDKq0iRynTo1VVbk5flMzA/wV9uN51lmLq7zL7foq8RrBj/JLlgOEfTMpiXQHF2FpXHDJUSG+EBEVd+u2zjxYBt7uwU1y2Hp2AU7kGK/t64YSj8W931d64K6hXBp4Kf4DP1wwHu2oRLfeOVBuJpbWrD9MpqLTmqbw6iAGx8hWVsFGDj9B46ic= ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44
2026-03-06T23:38:08.598 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph config set mgr mgr/cephadm/allow_ptrace true
2026-03-06T23:38:09.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:08 vm02 bash[17013]: cluster 2026-03-06T22:38:07.999954+0000 mon.vm02 (mon.0) 100 : cluster [DBG] mgrmap e12: vm02.opvwec(active, since 2s)
2026-03-06T23:38:09.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:08 vm02 bash[17013]: audit 2026-03-06T22:38:08.304987+0000 mon.vm02 (mon.0) 101 : audit [INF] from='client.? 192.168.123.102:0/2022481189' entity='client.admin'
2026-03-06T23:38:11.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:11 vm02 bash[17013]: audit 2026-03-06T22:38:10.669691+0000 mon.vm02 (mon.0) 102 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:11.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:11 vm02 bash[17013]: audit 2026-03-06T22:38:11.303623+0000 mon.vm02 (mon.0) 103 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:12.521 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:38:12.918 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755
2026-03-06T23:38:12.918 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph orch client-keyring set client.admin '*' --mode 0755
2026-03-06T23:38:13.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:13 vm02 bash[17013]: cluster 2026-03-06T22:38:12.307231+0000 mon.vm02 (mon.0) 104 : cluster [DBG] mgrmap e13: vm02.opvwec(active, since 6s)
2026-03-06T23:38:13.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:13 vm02 bash[17013]: audit 2026-03-06T22:38:12.848233+0000 mon.vm02 (mon.0) 105 : audit [INF] from='client.? 192.168.123.102:0/3529944239' entity='client.admin'
2026-03-06T23:38:17.533 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:38:18.083 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm07
2026-03-06T23:38:18.083 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-06T23:38:18.083 DEBUG:teuthology.orchestra.run.vm07:> dd of=/etc/ceph/ceph.conf
2026-03-06T23:38:18.086 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-06T23:38:18.086 DEBUG:teuthology.orchestra.run.vm07:> dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-06T23:38:18.130 INFO:tasks.cephadm:Adding host vm07 to orchestrator...
2026-03-06T23:38:18.130 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph orch host add vm07
2026-03-06T23:38:18.136 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:17 vm02 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-06T23:38:18.363 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:18 vm02 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
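Aside (illustrative, not part of the captured run): every cluster command in this run is wrapped in `cephadm shell` with an explicit --image, conf, keyring and fsid, as in the `ceph orch host add vm07` call above. A minimal sketch of that wrapper pattern; the constants are copied from the log, the helper name cephadm_ceph is invented:

    # Illustrative sketch only: the `cephadm shell` wrapper used for every
    # orchestrator call in this run. Values copied from the log above.
    import subprocess

    CEPHADM = "/home/ubuntu/cephtest/cephadm"
    IMAGE = ("harbor.clyso.com/custom-ceph/ceph/ceph:"
             "cobaltcore-storage-v19.2.3-fasttrack-5")
    FSID = "f8b8c16a-19ac-11f1-87e7-9b7402b99c44"

    def cephadm_ceph(*args: str) -> str:
        # Run `ceph <args>` inside a cephadm shell container and return stdout.
        cmd = ["sudo", CEPHADM, "--image", IMAGE, "shell",
               "-c", "/etc/ceph/ceph.conf",
               "-k", "/etc/ceph/ceph.client.admin.keyring",
               "--fsid", FSID, "--", "ceph", *args]
        return subprocess.run(cmd, check=True, capture_output=True,
                              text=True).stdout

    print(cephadm_ceph("orch", "host", "add", "vm07"))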
2026-03-06T23:38:18.363 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:18 vm02 bash[17013]: audit 2026-03-06T22:38:17.354618+0000 mon.vm02 (mon.0) 106 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:18.468 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:18 vm02 bash[17013]: audit 2026-03-06T22:38:17.357172+0000 mon.vm02 (mon.0) 107 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:18.468 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:18 vm02 bash[17013]: audit 2026-03-06T22:38:17.357813+0000 mon.vm02 (mon.0) 108 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch
2026-03-06T23:38:18.468 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:18 vm02 bash[17013]: audit 2026-03-06T22:38:17.360584+0000 mon.vm02 (mon.0) 109 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:18.468 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:18 vm02 bash[17013]: audit 2026-03-06T22:38:17.361457+0000 mon.vm02 (mon.0) 110 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm02", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-06T23:38:18.468 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:18 vm02 bash[17013]: audit 2026-03-06T22:38:17.362367+0000 mon.vm02 (mon.0) 111 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec' cmd='[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm02", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished
2026-03-06T23:38:18.468 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:18 vm02 bash[17013]: audit 2026-03-06T22:38:17.363415+0000 mon.vm02 (mon.0) 112 : audit [DBG] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:38:18.468 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:18 vm02 bash[17013]: cephadm 2026-03-06T22:38:17.363970+0000 mgr.vm02.opvwec (mgr.14168) 10 : cephadm [INF] Deploying daemon ceph-exporter.vm02 on vm02
2026-03-06T23:38:18.468 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:18 vm02 bash[17013]: audit 2026-03-06T22:38:17.962347+0000 mon.vm02 (mon.0) 113 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:18.468 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:18 vm02 bash[17013]: audit 2026-03-06T22:38:18.289410+0000 mon.vm02 (mon.0) 114 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:18.468 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:18 vm02 bash[17013]: audit 2026-03-06T22:38:18.292546+0000 mon.vm02 (mon.0) 115 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:18.469 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:18 vm02 bash[17013]: audit 2026-03-06T22:38:18.295660+0000 mon.vm02 (mon.0) 116 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:18.469 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:18 vm02 bash[17013]: audit 2026-03-06T22:38:18.300930+0000 mon.vm02 (mon.0) 117 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:18.469 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:18 vm02 bash[17013]: audit 2026-03-06T22:38:18.302221+0000 mon.vm02 (mon.0) 118 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm02", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-06T23:38:18.469 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:18 vm02 bash[17013]: audit 2026-03-06T22:38:18.305110+0000 mon.vm02 (mon.0) 119 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.vm02", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
2026-03-06T23:38:18.469 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:18 vm02 bash[17013]: audit 2026-03-06T22:38:18.307964+0000 mon.vm02 (mon.0) 120 : audit [DBG] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:38:18.992 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:18 vm02 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-06T23:38:19.354 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:19 vm02 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
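Aside (illustrative, not part of the captured run): before each daemon deploy above, cephadm mints a dedicated key with `auth get-or-create`; the audit entries record the exact caps. A sketch of the equivalent manual call, with the caps copied verbatim from the client.crash.vm02 entry:

    # Illustrative sketch only: mint the crash.vm02 key by hand. Caps are
    # copied verbatim from the mon audit entry above; assumes a working
    # `ceph` CLI and admin keyring on the host.
    import subprocess

    subprocess.run(
        ["ceph", "auth", "get-or-create", "client.crash.vm02",
         "mon", "profile crash", "mgr", "profile crash"],
        check=True,
    )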
2026-03-06T23:38:19.605 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:19 vm02 bash[17013]: audit 2026-03-06T22:38:17.959436+0000 mgr.vm02.opvwec (mgr.14168) 11 : audit [DBG] from='client.14190 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:38:19.605 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:19 vm02 bash[17013]: cephadm 2026-03-06T22:38:18.308806+0000 mgr.vm02.opvwec (mgr.14168) 12 : cephadm [INF] Deploying daemon crash.vm02 on vm02
2026-03-06T23:38:19.605 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:19 vm02 bash[17013]: audit 2026-03-06T22:38:19.128865+0000 mon.vm02 (mon.0) 121 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:19.605 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:19 vm02 bash[17013]: audit 2026-03-06T22:38:19.132762+0000 mon.vm02 (mon.0) 122 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:19.605 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:19 vm02 bash[17013]: audit 2026-03-06T22:38:19.136258+0000 mon.vm02 (mon.0) 123 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:19.605 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:19 vm02 bash[17013]: audit 2026-03-06T22:38:19.139615+0000 mon.vm02 (mon.0) 124 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:19.606 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:19 vm02 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-06T23:38:19.992 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:19 vm02 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-06T23:38:20.366 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:20 vm02 bash[17013]: cephadm 2026-03-06T22:38:19.140610+0000 mgr.vm02.opvwec (mgr.14168) 13 : cephadm [INF] Deploying daemon node-exporter.vm02 on vm02
2026-03-06T23:38:20.366 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:20 vm02 bash[17013]: audit 2026-03-06T22:38:19.798529+0000 mon.vm02 (mon.0) 125 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:20.366 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:20 vm02 bash[17013]: audit 2026-03-06T22:38:19.801834+0000 mon.vm02 (mon.0) 126 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:20.366 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:20 vm02 bash[17013]: audit 2026-03-06T22:38:19.804857+0000 mon.vm02 (mon.0) 127 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:20.366 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:20 vm02 bash[17013]: audit 2026-03-06T22:38:19.807308+0000 mon.vm02 (mon.0) 128 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:21.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:21 vm02 bash[17013]: cephadm 2026-03-06T22:38:19.812378+0000 mgr.vm02.opvwec (mgr.14168) 14 : cephadm [INF] Deploying daemon alertmanager.vm02 on vm02
2026-03-06T23:38:21.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:21 vm02 bash[17013]: audit 2026-03-06T22:38:20.531071+0000 mon.vm02 (mon.0) 129 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:23.918 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:23 vm02 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-06T23:38:23.987 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:38:24.180 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:24 vm02 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-06T23:38:25.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:25 vm02 bash[17013]: audit 2026-03-06T22:38:24.143929+0000 mon.vm02 (mon.0) 130 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:25.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:25 vm02 bash[17013]: audit 2026-03-06T22:38:24.147183+0000 mon.vm02 (mon.0) 131 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:25.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:25 vm02 bash[17013]: audit 2026-03-06T22:38:24.150103+0000 mon.vm02 (mon.0) 132 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:25.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:25 vm02 bash[17013]: audit 2026-03-06T22:38:24.155310+0000 mon.vm02 (mon.0) 133 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:25.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:25 vm02 bash[17013]: audit 2026-03-06T22:38:24.158598+0000 mon.vm02 (mon.0) 134 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:25.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:25 vm02 bash[17013]: audit 2026-03-06T22:38:24.161154+0000 mon.vm02 (mon.0) 135 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:25.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:25 vm02 bash[17013]: cephadm 2026-03-06T22:38:24.165524+0000 mgr.vm02.opvwec (mgr.14168) 15 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates
2026-03-06T23:38:25.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:25 vm02 bash[17013]: audit 2026-03-06T22:38:24.302637+0000 mon.vm02 (mon.0) 136 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:25.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:25 vm02 bash[17013]: audit 2026-03-06T22:38:24.306401+0000 mon.vm02 (mon.0) 137 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:25.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:25 vm02 bash[17013]: audit 2026-03-06T22:38:24.307730+0000 mon.vm02 (mon.0) 138 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-06T23:38:25.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:25 vm02 bash[17013]: audit 2026-03-06T22:38:24.308007+0000 mgr.vm02.opvwec (mgr.14168) 16 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-06T23:38:25.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:25 vm02 bash[17013]: audit 2026-03-06T22:38:24.310204+0000 mon.vm02 (mon.0) 139 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:25.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:25 vm02 bash[17013]: cephadm 2026-03-06T22:38:24.316756+0000 mgr.vm02.opvwec (mgr.14168) 17 : cephadm [INF] Deploying daemon grafana.vm02 on vm02
2026-03-06T23:38:26.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:26 vm02 bash[17013]: audit 2026-03-06T22:38:24.692195+0000 mgr.vm02.opvwec (mgr.14168) 18 : audit [DBG] from='client.14193 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm07", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:38:26.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:26 vm02 bash[17013]: audit 2026-03-06T22:38:25.534517+0000 mon.vm02 (mon.0) 140 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:27.321 INFO:teuthology.orchestra.run.vm02.stdout:Added host 'vm07' with addr '192.168.123.107'
2026-03-06T23:38:27.431 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:27 vm02 bash[17013]: cluster 2026-03-06T22:38:25.505309+0000 mgr.vm02.opvwec (mgr.14168) 19 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:38:27.431 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:27 vm02 bash[17013]: cephadm 2026-03-06T22:38:25.553173+0000 mgr.vm02.opvwec (mgr.14168) 20 : cephadm [INF] Deploying cephadm binary to vm07
2026-03-06T23:38:27.431 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph orch host ls --format=json
2026-03-06T23:38:28.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:28 vm02 bash[17013]: audit 2026-03-06T22:38:27.314943+0000 mon.vm02 (mon.0) 141 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:28.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:28 vm02 bash[17013]: cephadm 2026-03-06T22:38:27.315480+0000 mgr.vm02.opvwec (mgr.14168) 21 : cephadm [INF] Added host vm07
2026-03-06T23:38:29.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:29 vm02 bash[17013]: cluster 2026-03-06T22:38:27.505434+0000 mgr.vm02.opvwec (mgr.14168) 22 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:38:31.713 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:31 vm02 bash[17013]: cluster 2026-03-06T22:38:29.505643+0000 mgr.vm02.opvwec (mgr.14168) 23 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:38:32.253 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:38:32.905 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:32 vm02 bash[17013]: cluster 2026-03-06T22:38:31.505841+0000 mgr.vm02.opvwec (mgr.14168) 24 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:38:33.043 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-06T23:38:33.044 INFO:teuthology.orchestra.run.vm02.stdout:[{"addr": "192.168.123.102", "hostname": "vm02", "labels": [], "status": ""}, {"addr": "192.168.123.107", "hostname": "vm07", "labels": [], "status": ""}]
2026-03-06T23:38:33.168 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:33 vm02 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
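Aside (illustrative, not part of the captured run): the `ceph orch host ls --format=json` call above returns machine-readable inventory; a small sketch of the check a harness can do with it, with the output string copied from the log:

    # Illustrative sketch only: parse the host inventory JSON printed above
    # and verify both hosts joined with the expected addresses.
    import json

    out = ('[{"addr": "192.168.123.102", "hostname": "vm02", "labels": [], '
           '"status": ""}, {"addr": "192.168.123.107", "hostname": "vm07", '
           '"labels": [], "status": ""}]')
    hosts = {h["hostname"]: h["addr"] for h in json.loads(out)}
    assert hosts == {"vm02": "192.168.123.102", "vm07": "192.168.123.107"}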
2026-03-06T23:38:33.169 INFO:tasks.cephadm:Setting crush tunables to default
2026-03-06T23:38:33.170 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph osd crush tunables default
2026-03-06T23:38:33.491 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:33 vm02 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-06T23:38:34.585 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:34 vm02 bash[17013]: audit 2026-03-06T22:38:33.038135+0000 mgr.vm02.opvwec (mgr.14168) 25 : audit [DBG] from='client.14195 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-06T23:38:34.585 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:34 vm02 bash[17013]: audit 2026-03-06T22:38:33.363959+0000 mon.vm02 (mon.0) 142 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:34.586 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:34 vm02 bash[17013]: audit 2026-03-06T22:38:33.367168+0000 mon.vm02 (mon.0) 143 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:34.586 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:34 vm02 bash[17013]: audit 2026-03-06T22:38:33.370001+0000 mon.vm02 (mon.0) 144 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:34.586 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:34 vm02 bash[17013]: audit 2026-03-06T22:38:33.372690+0000 mon.vm02 (mon.0) 145 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:34.586 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:34 vm02 bash[17013]: audit 2026-03-06T22:38:33.375699+0000 mon.vm02 (mon.0) 146 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:34.586 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:34 vm02 bash[17013]: audit 2026-03-06T22:38:33.378772+0000 mon.vm02 (mon.0) 147 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:34.586 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:34 vm02 bash[17013]: audit 2026-03-06T22:38:33.382485+0000 mon.vm02 (mon.0) 148 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:34.586 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:34 vm02 bash[17013]: audit 2026-03-06T22:38:33.384557+0000 mon.vm02 (mon.0) 149 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:35.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:35 vm02 bash[17013]: cluster 2026-03-06T22:38:33.506025+0000 mgr.vm02.opvwec (mgr.14168) 26 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:38:35.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:35 vm02 bash[17013]: cephadm 2026-03-06T22:38:33.571369+0000 mgr.vm02.opvwec (mgr.14168) 27 : cephadm [INF] Deploying daemon prometheus.vm02 on vm02
2026-03-06T23:38:36.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:36 vm02 bash[17013]: cluster 2026-03-06T22:38:35.506224+0000 mgr.vm02.opvwec (mgr.14168) 28 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:38:36.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:36 vm02 bash[17013]: audit 2026-03-06T22:38:35.538692+0000 mon.vm02 (mon.0) 150 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:38.018 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:38:38.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:38 vm02 bash[17013]: cluster 2026-03-06T22:38:37.506430+0000 mgr.vm02.opvwec (mgr.14168) 29 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:38:39.602 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:39 vm02 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-06T23:38:39.636 INFO:teuthology.orchestra.run.vm02.stderr:adjusted tunables profile to default
2026-03-06T23:38:39.716 INFO:tasks.cephadm:Adding mon.vm02 on vm02
2026-03-06T23:38:39.716 INFO:tasks.cephadm:Adding mon.vm07 on vm07
2026-03-06T23:38:39.716 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph orch apply mon '2;vm02:192.168.123.102=vm02;vm07:192.168.123.107=vm07'
2026-03-06T23:38:39.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:39 vm02 bash[17013]: audit 2026-03-06T22:38:39.158705+0000 mon.vm02 (mon.0) 151 : audit [INF] from='client.? 192.168.123.102:0/800752930' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch
2026-03-06T23:38:40.901 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:40 vm02 bash[17013]: cluster 2026-03-06T22:38:39.506610+0000 mgr.vm02.opvwec (mgr.14168) 30 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:38:40.901 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:40 vm02 bash[17013]: audit 2026-03-06T22:38:39.628016+0000 mon.vm02 (mon.0) 152 : audit [INF] from='client.? 192.168.123.102:0/800752930' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished
2026-03-06T23:38:40.901 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:40 vm02 bash[17013]: audit 2026-03-06T22:38:39.629355+0000 mon.vm02 (mon.0) 153 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:40.901 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:40 vm02 bash[17013]: cluster 2026-03-06T22:38:39.629546+0000 mon.vm02 (mon.0) 154 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-06T23:38:40.902 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:40 vm02 bash[17013]: audit 2026-03-06T22:38:39.634177+0000 mon.vm02 (mon.0) 155 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:40.902 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:40 vm02 bash[17013]: audit 2026-03-06T22:38:39.636451+0000 mon.vm02 (mon.0) 156 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:40.902 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:40 vm02 bash[17013]: audit 2026-03-06T22:38:39.637767+0000 mon.vm02 (mon.0) 157 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
2026-03-06T23:38:40.902 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:40 vm02 bash[17013]: audit 2026-03-06T22:38:40.542414+0000 mon.vm02 (mon.0) 158 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec'
2026-03-06T23:38:40.981 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-06T23:38:41.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:41 vm02 bash[17013]: audit 2026-03-06T22:38:40.641886+0000 mon.vm02 (mon.0) 159 : audit [INF] from='mgr.14168 192.168.123.102:0/1936867442' entity='mgr.vm02.opvwec' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
2026-03-06T23:38:41.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:41 vm02 bash[17013]: cluster 2026-03-06T22:38:40.643434+0000 mon.vm02 (mon.0) 160 : cluster [DBG] mgrmap e14: vm02.opvwec(active, since 35s)
2026-03-06T23:38:42.005 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-06T23:38:50.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:50 vm02 bash[17013]: cluster 2026-03-06T22:38:50.732720+0000 mon.vm02 (mon.0) 161 : cluster [INF] Active manager daemon vm02.opvwec restarted
2026-03-06T23:38:50.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:50 vm02 bash[17013]: cluster 2026-03-06T22:38:50.732973+0000 mon.vm02 (mon.0) 162 : cluster [INF] Activating manager daemon vm02.opvwec
2026-03-06T23:38:50.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:50 vm02 bash[17013]: cluster 2026-03-06T22:38:50.738820+0000 mon.vm02 (mon.0) 163 : cluster [DBG] osdmap e5: 0 total, 0 up, 0 in
2026-03-06T23:38:50.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:50 vm02 bash[17013]: cluster 2026-03-06T22:38:50.742618+0000 mon.vm02 (mon.0) 164 : cluster [DBG] mgrmap e15: vm02.opvwec(active, starting, since 0.00974227s)
2026-03-06T23:38:50.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:50 vm02 bash[17013]: audit 2026-03-06T22:38:50.744507+0000 mon.vm02 (mon.0) 165 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mon metadata", "id": "vm02"}]: dispatch
2026-03-06T23:38:50.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:50 vm02 bash[17013]: audit 2026-03-06T22:38:50.745399+0000 mon.vm02 (mon.0) 166 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mgr metadata", "who": "vm02.opvwec", "id": "vm02.opvwec"}]: dispatch
2026-03-06T23:38:50.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:50 vm02 bash[17013]: audit 2026-03-06T22:38:50.746120+0000 mon.vm02 (mon.0) 167 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-06T23:38:50.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:50 vm02 bash[17013]: audit 2026-03-06T22:38:50.746213+0000 mon.vm02 (mon.0) 168 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-06T23:38:50.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:50 vm02 bash[17013]: audit 2026-03-06T22:38:50.746285+0000 mon.vm02 (mon.0) 169 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-06T23:38:50.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:50 vm02 bash[17013]: cluster 2026-03-06T22:38:50.750892+0000 mon.vm02 (mon.0) 170 : cluster [INF] Manager daemon vm02.opvwec is now available
2026-03-06T23:38:50.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:50 vm02 bash[17013]: audit 2026-03-06T22:38:50.776164+0000 mon.vm02 (mon.0) 171 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:38:51.754 INFO:teuthology.orchestra.run.vm07.stdout:Scheduled mon update...
2026-03-06T23:38:51.846 DEBUG:teuthology.orchestra.run.vm07:mon.vm07> sudo journalctl -f -n 0 -u ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@mon.vm07.service
2026-03-06T23:38:51.847 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-06T23:38:51.847 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph mon dump -f json
2026-03-06T23:38:52.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:51 vm02 bash[17013]: audit 2026-03-06T22:38:50.788360+0000 mon.vm02 (mon.0) 172 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-06T23:38:52.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:51 vm02 bash[17013]: audit 2026-03-06T22:38:50.799611+0000 mon.vm02 (mon.0) 173 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm02.opvwec/mirror_snapshot_schedule"}]: dispatch
2026-03-06T23:38:52.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:51 vm02 bash[17013]: audit 2026-03-06T22:38:50.800204+0000 mon.vm02 (mon.0) 174 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:38:52.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:51 vm02 bash[17013]: audit 2026-03-06T22:38:50.858231+0000 mon.vm02 (mon.0) 175 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm02.opvwec/trash_purge_schedule"}]: dispatch
2026-03-06T23:38:52.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:51 vm02 bash[17013]: audit 2026-03-06T22:38:51.590251+0000 mon.vm02 (mon.0) 176 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:38:52.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:51 vm02 bash[17013]: cluster 2026-03-06T22:38:51.742808+0000 mon.vm02 (mon.0) 177 : cluster [DBG] mgrmap e16: vm02.opvwec(active, since 1.00993s)
2026-03-06T23:38:52.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:51 vm02 bash[17013]: audit 2026-03-06T22:38:51.748445+0000 mon.vm02 (mon.0) 178 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:38:53.149 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-06T23:38:53.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:52 vm02 bash[17013]: cephadm 2026-03-06T22:38:51.745838+0000 mgr.vm02.opvwec (mgr.14199) 2 : cephadm [INF] Saving service mon spec with placement vm02:192.168.123.102=vm02;vm07:192.168.123.107=vm07;count:2
2026-03-06T23:38:53.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:52 vm02 bash[17013]: cephadm 2026-03-06T22:38:52.017038+0000 mgr.vm02.opvwec (mgr.14199) 3 : cephadm [INF] [06/Mar/2026:22:38:52] ENGINE Bus STARTING
2026-03-06T23:38:53.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:52 vm02 bash[17013]: cephadm 2026-03-06T22:38:52.119486+0000 mgr.vm02.opvwec (mgr.14199) 4 : cephadm [INF] [06/Mar/2026:22:38:52] ENGINE Serving on http://192.168.123.102:8765
2026-03-06T23:38:53.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:52 vm02 bash[17013]: cephadm 2026-03-06T22:38:52.230858+0000 mgr.vm02.opvwec (mgr.14199) 5 : cephadm [INF] [06/Mar/2026:22:38:52] ENGINE Serving on https://192.168.123.102:7150
2026-03-06T23:38:53.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:52 vm02 bash[17013]: cephadm 2026-03-06T22:38:52.230895+0000 mgr.vm02.opvwec (mgr.14199) 6 : cephadm [INF] [06/Mar/2026:22:38:52] ENGINE Bus STARTED
2026-03-06T23:38:53.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:52 vm02 bash[17013]: cephadm 2026-03-06T22:38:52.231292+0000 mgr.vm02.opvwec (mgr.14199) 7 : cephadm [INF] [06/Mar/2026:22:38:52] ENGINE Client ('192.168.123.102', 48630) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-06T23:38:54.183 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-06T23:38:54.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:53 vm02 bash[17013]: cluster 2026-03-06T22:38:52.797188+0000 mon.vm02 (mon.0) 179 : cluster [DBG] mgrmap e17: vm02.opvwec(active, since 2s)
2026-03-06T23:38:54.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:53 vm02 bash[17013]: audit 2026-03-06T22:38:52.895703+0000 mon.vm02 (mon.0) 180 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:38:54.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:53 vm02 bash[17013]: audit 2026-03-06T22:38:53.473160+0000 mon.vm02 (mon.0) 181 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:38:54.567 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-06T23:38:54.567 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":1,"fsid":"f8b8c16a-19ac-11f1-87e7-9b7402b99c44","modified":"2026-03-06T22:37:18.048883Z","created":"2026-03-06T22:37:18.048883Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm02","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-06T23:38:54.567 INFO:teuthology.orchestra.run.vm07.stderr:dumped monmap epoch 1
2026-03-06T23:38:55.242 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:54 vm02 bash[17013]: audit 2026-03-06T22:38:54.562323+0000 mon.vm02 (mon.0) 182 : audit [DBG] from='client.? 192.168.123.107:0/2244693233' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-06T23:38:55.629 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-06T23:38:55.629 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph mon dump -f json
2026-03-06T23:38:57.535 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/config/ceph.conf
2026-03-06T23:38:57.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:57 vm02 bash[17013]: audit 2026-03-06T22:38:56.551308+0000 mon.vm02 (mon.0) 183 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:38:57.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:57 vm02 bash[17013]: audit 2026-03-06T22:38:56.553982+0000 mon.vm02 (mon.0) 184 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:38:57.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:57 vm02 bash[17013]: audit 2026-03-06T22:38:56.556321+0000 mon.vm02 (mon.0) 185 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:38:57.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:57 vm02 bash[17013]: audit 2026-03-06T22:38:56.558017+0000 mon.vm02 (mon.0) 186 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:38:57.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:57 vm02 bash[17013]: audit 2026-03-06T22:38:56.558546+0000 mon.vm02 (mon.0) 187 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-06T23:38:57.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:57 vm02 bash[17013]: audit 2026-03-06T22:38:56.727358+0000 mon.vm02 (mon.0) 188 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:38:57.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:57 vm02 bash[17013]: audit 2026-03-06T22:38:56.730146+0000 mon.vm02 (mon.0) 189 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:38:57.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:57 vm02 bash[17013]: audit 2026-03-06T22:38:57.307314+0000 mon.vm02 (mon.0) 190 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:38:57.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:57 vm02 bash[17013]: audit 2026-03-06T22:38:57.310209+0000 mon.vm02 (mon.0) 191 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:38:57.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:57 vm02 bash[17013]: audit 2026-03-06T22:38:57.310981+0000 mon.vm02 (mon.0) 192 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch
2026-03-06T23:38:57.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:57 vm02 bash[17013]: audit 2026-03-06T22:38:57.311583+0000 mon.vm02 (mon.0) 193 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:38:57.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:57 vm02 bash[17013]: audit 2026-03-06T22:38:57.311948+0000 mon.vm02 (mon.0) 194 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-06T23:38:57.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:57 vm02 bash[17013]: audit 2026-03-06T22:38:57.460493+0000 mon.vm02 (mon.0) 195 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:38:57.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:57 vm02 bash[17013]: audit 2026-03-06T22:38:57.463277+0000 mon.vm02 (mon.0) 196 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:38:57.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:57 vm02 bash[17013]: audit 2026-03-06T22:38:57.471684+0000 mon.vm02 (mon.0) 197 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:38:57.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:57 vm02 bash[17013]: audit 2026-03-06T22:38:57.473418+0000 mon.vm02 (mon.0) 198 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:38:57.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:57 vm02 bash[17013]: audit 2026-03-06T22:38:57.475424+0000 mon.vm02 (mon.0) 199 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:38:57.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:57 vm02 bash[17013]: audit 2026-03-06T22:38:57.476188+0000 mon.vm02 (mon.0) 200 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm07", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-06T23:38:57.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:57 vm02 bash[17013]: audit 2026-03-06T22:38:57.477104+0000 mon.vm02 (mon.0) 201 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd='[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm07", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished
2026-03-06T23:38:57.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:57 vm02 bash[17013]: audit 2026-03-06T22:38:57.478024+0000 mon.vm02 (mon.0) 202 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:38:57.998 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-06T23:38:57.998 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":1,"fsid":"f8b8c16a-19ac-11f1-87e7-9b7402b99c44","modified":"2026-03-06T22:37:18.048883Z","created":"2026-03-06T22:37:18.048883Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm02","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-06T23:38:57.998 INFO:teuthology.orchestra.run.vm07.stderr:dumped monmap epoch 1
2026-03-06T23:38:58.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:58 vm02 bash[17013]: cephadm 2026-03-06T22:38:57.312534+0000 mgr.vm02.opvwec (mgr.14199) 8 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf
2026-03-06T23:38:58.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:58 vm02 bash[17013]: cephadm 2026-03-06T22:38:57.312686+0000 mgr.vm02.opvwec (mgr.14199) 9 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf
2026-03-06T23:38:58.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:58 vm02 bash[17013]: cephadm 2026-03-06T22:38:57.346908+0000 mgr.vm02.opvwec (mgr.14199) 10 : cephadm [INF] Updating vm02:/var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/config/ceph.conf
2026-03-06T23:38:58.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:58 vm02 bash[17013]: cephadm 2026-03-06T22:38:57.348721+0000 mgr.vm02.opvwec (mgr.14199) 11 : cephadm [INF] Updating vm07:/var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/config/ceph.conf
2026-03-06T23:38:58.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:58 vm02 bash[17013]: cephadm 2026-03-06T22:38:57.383169+0000 mgr.vm02.opvwec (mgr.14199) 12 : cephadm [INF] Updating vm02:/etc/ceph/ceph.client.admin.keyring
2026-03-06T23:38:58.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:58 vm02 bash[17013]: cephadm 2026-03-06T22:38:57.389925+0000 mgr.vm02.opvwec (mgr.14199) 13 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring
2026-03-06T23:38:58.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:58 vm02 bash[17013]: cephadm 2026-03-06T22:38:57.419595+0000 mgr.vm02.opvwec (mgr.14199) 14 : cephadm [INF] Updating vm02:/var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/config/ceph.client.admin.keyring
2026-03-06T23:38:58.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:58 vm02 bash[17013]: cephadm 2026-03-06T22:38:57.429122+0000 mgr.vm02.opvwec (mgr.14199) 15 : cephadm [INF] Updating vm07:/var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/config/ceph.client.admin.keyring
2026-03-06T23:38:58.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:58 vm02 bash[17013]: cephadm 2026-03-06T22:38:57.478473+0000 mgr.vm02.opvwec (mgr.14199) 16 : cephadm [INF] Deploying daemon ceph-exporter.vm07 on vm07
2026-03-06T23:38:58.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:58 vm02 bash[17013]: audit 2026-03-06T22:38:57.993622+0000 mon.vm02 (mon.0) 203 : audit [DBG] from='client.? 192.168.123.107:0/2111360740' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-06T23:38:59.339 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-06T23:38:59.339 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph mon dump -f json
2026-03-06T23:39:00.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:59 vm02 bash[17013]: audit 2026-03-06T22:38:58.831709+0000 mon.vm02 (mon.0) 204 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:00.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:59 vm02 bash[17013]: audit 2026-03-06T22:38:58.834022+0000 mon.vm02 (mon.0) 205 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:00.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:59 vm02 bash[17013]: audit 2026-03-06T22:38:58.835982+0000 mon.vm02 (mon.0) 206 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:00.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:59 vm02 bash[17013]: audit 2026-03-06T22:38:58.837793+0000 mon.vm02 (mon.0) 207 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:00.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:59 vm02 bash[17013]: audit 2026-03-06T22:38:58.838690+0000 mon.vm02 (mon.0) 208 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm07", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-06T23:39:00.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:59 vm02 bash[17013]: audit 2026-03-06T22:38:58.840272+0000 mon.vm02 (mon.0) 209 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.vm07", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
2026-03-06T23:39:00.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:59 vm02 bash[17013]: audit 2026-03-06T22:38:58.841321+0000 mon.vm02 (mon.0) 210 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:39:00.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:59 vm02 bash[17013]: cephadm 2026-03-06T22:38:58.841988+0000 mgr.vm02.opvwec (mgr.14199) 17 : cephadm [INF] Deploying daemon crash.vm07 on vm07
2026-03-06T23:39:00.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:59 vm02 bash[17013]: audit 2026-03-06T22:38:59.743958+0000 mon.vm02 (mon.0) 211 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:00.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:59 vm02 bash[17013]: audit 2026-03-06T22:38:59.746065+0000 mon.vm02 (mon.0) 212 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:00.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:59 vm02 bash[17013]: audit 2026-03-06T22:38:59.748113+0000 mon.vm02 (mon.0) 213 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:00.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:38:59 vm02 bash[17013]: audit 2026-03-06T22:38:59.749758+0000 mon.vm02 (mon.0) 214 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:01.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:01 vm02 bash[17013]: cephadm 2026-03-06T22:38:59.750471+0000 mgr.vm02.opvwec (mgr.14199) 18 : cephadm [INF] Deploying daemon node-exporter.vm07 on vm07
2026-03-06T23:39:01.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:01 vm02 bash[17013]: audit 2026-03-06T22:39:00.480936+0000 mon.vm02 (mon.0) 215 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:01.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:01 vm02 bash[17013]: audit 2026-03-06T22:39:00.483673+0000 mon.vm02 (mon.0) 216 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:01.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:01 vm02 bash[17013]: audit 2026-03-06T22:39:00.486240+0000 mon.vm02 (mon.0) 217 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:01.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:01 vm02 bash[17013]: audit 2026-03-06T22:39:00.489021+0000 mon.vm02 (mon.0) 218 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:01.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:01 vm02 bash[17013]: audit 2026-03-06T22:39:00.490399+0000 mon.vm02 (mon.0) 219 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm07.jbleen", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-06T23:39:01.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:01 vm02 bash[17013]: audit 2026-03-06T22:39:00.491663+0000 mon.vm02 (mon.0) 220 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.vm07.jbleen", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
2026-03-06T23:39:01.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:01 vm02 bash[17013]: audit 2026-03-06T22:39:00.493054+0000 mon.vm02 (mon.0) 221 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-06T23:39:01.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:01 vm02 bash[17013]: audit 2026-03-06T22:39:00.493563+0000 mon.vm02 (mon.0) 222 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:39:01.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:01 vm02 bash[17013]: cephadm 2026-03-06T22:39:00.494040+0000 mgr.vm02.opvwec (mgr.14199) 19 : cephadm [INF] Deploying daemon mgr.vm07.jbleen on vm07
2026-03-06T23:39:01.744 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:01 vm02 bash[17013]: audit 2026-03-06T22:39:00.777305+0000 mon.vm02 (mon.0) 223 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:01.744 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:01 vm02 bash[17013]: audit 2026-03-06T22:39:01.325433+0000 mon.vm02 (mon.0) 224 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:01.744 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:01 vm02 bash[17013]: audit 2026-03-06T22:39:01.328232+0000 mon.vm02 (mon.0) 225 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:01.744 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:01 vm02 bash[17013]: audit 2026-03-06T22:39:01.331126+0000 mon.vm02 (mon.0) 226 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:01.744 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:01 vm02 bash[17013]: audit 2026-03-06T22:39:01.333603+0000 mon.vm02 (mon.0) 227 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:01.744 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:01 vm02 bash[17013]: audit 2026-03-06T22:39:01.334949+0000 mon.vm02 (mon.0) 228 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-06T23:39:01.744 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:01 vm02 bash[17013]: audit 2026-03-06T22:39:01.335457+0000 mon.vm02 (mon.0) 229 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:39:02.459 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 systemd[1]: Started Ceph mon.vm07 for f8b8c16a-19ac-11f1-87e7-9b7402b99c44.
2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 0 ceph version 19.2.3-39-g340d3c24fc6 (340d3c24fc6ae7529322dc7ccee6c6cb2589da0a) squid (stable), process ceph-mon, pid 6 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 0 pidfile_write: ignore empty --pid-file 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 0 load: jerasure load: lrc 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: RocksDB version: 7.9.2 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Git sha 0 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Compile date 2026-03-06 13:52:12 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: DB SUMMARY 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: DB Session ID: 09CZJV4EUCZHOQQSNFTJ 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: CURRENT file: CURRENT 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: IDENTITY file: IDENTITY 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-vm07/store.db dir, Total Num: 0, files: 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-vm07/store.db: 000004.log size: 511 ; 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.error_if_exists: 0 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.create_if_missing: 0 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.paranoid_checks: 1 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.flush_verify_memtable_count: 1 
2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.env: 0x55682dc5cca0 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.info_log: 0x55683d6e9820 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.statistics: (nil) 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.use_fsync: 0 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.max_log_file_size: 0 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.allow_fallocate: 1 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.use_direct_reads: 0 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: 
debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.db_log_dir: 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.wal_dir: 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-06T23:39:02.733 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.write_buffer_manager: 0x55683d6ed900 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: 
Options.enable_thread_tracking: 0 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.unordered_write: 0 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.row_cache: None 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.wal_filter: None 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.two_write_queues: 0 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.wal_compression: 0 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.atomic_flush: 0 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.log_readahead_size: 0 2026-03-06T23:39:02.734 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.max_background_jobs: 2 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.max_background_compactions: -1 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.max_subcompactions: 1 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.stats_history_buffer_size: 1048576 
2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.max_open_files: -1 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.max_background_flushes: -1 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Compression algorithms supported: 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: kZSTD supported: 0 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: kXpressCompression supported: 0 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: kBZip2Compression supported: 0 2026-03-06T23:39:02.734 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: kLZ4Compression supported: 1 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: kZlibCompression supported: 1 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: kSnappyCompression supported: 1 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-vm07/store.db/MANIFEST-000005 2026-03-06T23:39:02.735 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.merge_operator: 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.compaction_filter: None 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55683d6e8280) 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: cache_index_and_filter_blocks: 1 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: pin_top_level_index_and_filter: 1 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: index_type: 0 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: data_block_index_type: 0 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: index_shortening: 1 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: data_block_hash_table_util_ratio: 0.750000 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: checksum: 4 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: no_block_cache: 0 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: block_cache: 0x55683d70f1f0 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: block_cache_name: BinnedLRUCache 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: block_cache_options: 2026-03-06T23:39:02.735 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: capacity : 536870912 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: num_shard_bits : 4 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: strict_capacity_limit : 0 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: high_pri_pool_ratio: 0.000 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: block_cache_compressed: (nil) 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: persistent_cache: (nil) 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: block_size: 4096 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: block_size_deviation: 10 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: block_restart_interval: 16 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: index_block_restart_interval: 1 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: metadata_block_size: 4096 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: partition_filters: 0 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: use_delta_encoding: 1 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: filter_policy: bloomfilter 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: whole_key_filtering: 1 2026-03-06T23:39:02.735 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: verify_compression: 0 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: read_amp_bytes_per_bit: 0 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: format_version: 5 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: enable_index_compression: 1 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: block_align: 0 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: max_auto_readahead_size: 262144 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: prepopulate_block_cache: 0 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: initial_auto_readahead_size: 8192 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: num_file_reads_for_auto_readahead: 2 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.compression: 
NoCompression 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.num_levels: 7 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.compression_opts.window_bits: -14 
2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-06T23:39:02.736 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 
bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 
2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.inplace_update_support: 0 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.bloom_locality: 0 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.max_successive_merges: 0 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-06T23:39:02.737 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.ttl: 2592000 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-06T23:39:02.737 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.enable_blob_files: false 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.min_blob_size: 0 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.577+0000 7fb26299bd80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.585+0000 7fb26299bd80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-vm07/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.585+0000 7fb26299bd80 4 rocksdb: 
[db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.585+0000 7fb26299bd80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3ecb7f4a-da5c-45b5-a7e4-d0260a3734de 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.585+0000 7fb26299bd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1772836742589265, "job": 1, "event": "recovery_started", "wal_files": [4]} 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.585+0000 7fb26299bd80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.585+0000 7fb26299bd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1772836742590369, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772836742, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3ecb7f4a-da5c-45b5-a7e4-d0260a3734de", "db_session_id": "09CZJV4EUCZHOQQSNFTJ", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.585+0000 7fb26299bd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1772836742590470, "job": 1, "event": "recovery_finished"} 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.585+0000 7fb26299bd80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 10 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.589+0000 7fb26299bd80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-vm07/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.589+0000 7fb26299bd80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55683d710e00 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 
2026-03-06T22:39:02.589+0000 7fb26299bd80 4 rocksdb: DB pointer 0x55683d820000 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.589+0000 7fb26299bd80 0 mon.vm07 does not exist in monmap, will attempt to join an existing cluster 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.589+0000 7fb26299bd80 0 using public_addr v2:192.168.123.107:0/0 -> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.589+0000 7fb26299bd80 0 starting mon.vm07 rank -1 at public addrs [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] at bind addrs [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon_data /var/lib/ceph/mon/ceph-vm07 fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.589+0000 7fb26299bd80 1 mon.vm07@-1(???) e0 preinit fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.593+0000 7fb258765640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.593+0000 7fb258765640 4 rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: ** DB Stats ** 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: ** Compaction Stats [default] ** 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: 
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: L0 1/0 1.60 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.5 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: Sum 1/0 1.60 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.5 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.5 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: ** Compaction Stats [default] ** 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.5 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: Flush(GB): cumulative 0.000, interval 0.000 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: AddFile(Total Files): cumulative 0, interval 0 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: AddFile(L0 Files): cumulative 0, interval 0 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: AddFile(Keys): cumulative 0, interval 0 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: Cumulative compaction: 0.00 GB write, 0.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: Interval compaction: 0.00 GB write, 0.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: Block 
cache BinnedLRUCache@0x55683d70f1f0#6 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 7e-06 secs_since: 0 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: Block cache entry stats(count,size,portion): DataBlock(1,0.64 KB,0.00012219%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%) 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: ** File Read Latency Histogram By Level [default] ** 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.669+0000 7fb25b76b640 0 mon.vm07@-1(synchronizing).mds e1 new map 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.669+0000 7fb25b76b640 0 mon.vm07@-1(synchronizing).mds e1 print_map 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: e1 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: btime 2026-03-06T22:37:19:212552+0000 2026-03-06T23:39:02.738 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: legacy client fscid: -1 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: No filesystems configured 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.669+0000 7fb25b76b640 1 mon.vm07@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.669+0000 7fb25b76b640 1 mon.vm07@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.669+0000 7fb25b76b640 1 mon.vm07@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.669+0000 7fb25b76b640 1 mon.vm07@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.669+0000 7fb25b76b640 1 mon.vm07@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.669+0000 7fb25b76b640 1 mon.vm07@-1(synchronizing).osd e4 e4: 0 total, 0 up, 0 in 2026-03-06T23:39:02.739 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.669+0000 7fb25b76b640 1 mon.vm07@-1(synchronizing).osd e5 e5: 0 total, 0 up, 0 in 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.673+0000 7fb25b76b640 0 mon.vm07@-1(synchronizing).osd e5 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.673+0000 7fb25b76b640 0 mon.vm07@-1(synchronizing).osd e5 crush map has features 288514050185494528, adjusting msgr requires 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.673+0000 7fb25b76b640 0 mon.vm07@-1(synchronizing).osd e5 crush map has features 288514050185494528, adjusting msgr requires 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.673+0000 7fb25b76b640 0 mon.vm07@-1(synchronizing).osd e5 crush map has features 288514050185494528, adjusting msgr requires 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: cluster 2026-03-06T22:38:52.797188+0000 mon.vm02 (mon.0) 179 : cluster [DBG] mgrmap e17: vm02.opvwec(active, since 2s) 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: cluster 2026-03-06T22:38:52.797188+0000 mon.vm02 (mon.0) 179 : cluster [DBG] mgrmap e17: vm02.opvwec(active, since 2s) 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:52.895703+0000 mon.vm02 (mon.0) 180 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:52.895703+0000 mon.vm02 (mon.0) 180 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:53.473160+0000 mon.vm02 (mon.0) 181 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:53.473160+0000 mon.vm02 (mon.0) 181 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:54.562323+0000 mon.vm02 (mon.0) 182 : audit [DBG] from='client.? 192.168.123.107:0/2244693233' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:54.562323+0000 mon.vm02 (mon.0) 182 : audit [DBG] from='client.? 
192.168.123.107:0/2244693233' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:56.551308+0000 mon.vm02 (mon.0) 183 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:56.551308+0000 mon.vm02 (mon.0) 183 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:56.553982+0000 mon.vm02 (mon.0) 184 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:56.553982+0000 mon.vm02 (mon.0) 184 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:56.556321+0000 mon.vm02 (mon.0) 185 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:56.556321+0000 mon.vm02 (mon.0) 185 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:56.558017+0000 mon.vm02 (mon.0) 186 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:56.558017+0000 mon.vm02 (mon.0) 186 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:56.558546+0000 mon.vm02 (mon.0) 187 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:56.558546+0000 mon.vm02 (mon.0) 187 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:56.727358+0000 mon.vm02 (mon.0) 188 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:56.727358+0000 mon.vm02 (mon.0) 188 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:56.730146+0000 mon.vm02 (mon.0) 189 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 
bash[20848]: audit 2026-03-06T22:38:56.730146+0000 mon.vm02 (mon.0) 189 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:57.307314+0000 mon.vm02 (mon.0) 190 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:57.307314+0000 mon.vm02 (mon.0) 190 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:57.310209+0000 mon.vm02 (mon.0) 191 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:57.310209+0000 mon.vm02 (mon.0) 191 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:57.310981+0000 mon.vm02 (mon.0) 192 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:57.310981+0000 mon.vm02 (mon.0) 192 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:57.311583+0000 mon.vm02 (mon.0) 193 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:57.311583+0000 mon.vm02 (mon.0) 193 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:57.311948+0000 mon.vm02 (mon.0) 194 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:57.311948+0000 mon.vm02 (mon.0) 194 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:57.460493+0000 mon.vm02 (mon.0) 195 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:57.460493+0000 mon.vm02 (mon.0) 195 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.739 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:57.463277+0000 mon.vm02 (mon.0) 196 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:57.463277+0000 mon.vm02 (mon.0) 196 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:57.471684+0000 mon.vm02 (mon.0) 197 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:57.471684+0000 mon.vm02 (mon.0) 197 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:57.473418+0000 mon.vm02 (mon.0) 198 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:57.473418+0000 mon.vm02 (mon.0) 198 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:57.475424+0000 mon.vm02 (mon.0) 199 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:57.475424+0000 mon.vm02 (mon.0) 199 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:57.476188+0000 mon.vm02 (mon.0) 200 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm07", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:57.476188+0000 mon.vm02 (mon.0) 200 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm07", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:57.477104+0000 mon.vm02 (mon.0) 201 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd='[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm07", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:57.477104+0000 mon.vm02 (mon.0) 201 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd='[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm07", "caps": ["mon", 
"profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:57.478024+0000 mon.vm02 (mon.0) 202 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:57.478024+0000 mon.vm02 (mon.0) 202 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: cephadm 2026-03-06T22:38:57.312534+0000 mgr.vm02.opvwec (mgr.14199) 8 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: cephadm 2026-03-06T22:38:57.312534+0000 mgr.vm02.opvwec (mgr.14199) 8 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf 2026-03-06T23:39:02.739 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: cephadm 2026-03-06T22:38:57.312686+0000 mgr.vm02.opvwec (mgr.14199) 9 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: cephadm 2026-03-06T22:38:57.312686+0000 mgr.vm02.opvwec (mgr.14199) 9 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: cephadm 2026-03-06T22:38:57.346908+0000 mgr.vm02.opvwec (mgr.14199) 10 : cephadm [INF] Updating vm02:/var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/config/ceph.conf 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: cephadm 2026-03-06T22:38:57.346908+0000 mgr.vm02.opvwec (mgr.14199) 10 : cephadm [INF] Updating vm02:/var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/config/ceph.conf 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: cephadm 2026-03-06T22:38:57.348721+0000 mgr.vm02.opvwec (mgr.14199) 11 : cephadm [INF] Updating vm07:/var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/config/ceph.conf 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: cephadm 2026-03-06T22:38:57.348721+0000 mgr.vm02.opvwec (mgr.14199) 11 : cephadm [INF] Updating vm07:/var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/config/ceph.conf 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: cephadm 2026-03-06T22:38:57.383169+0000 mgr.vm02.opvwec (mgr.14199) 12 : cephadm [INF] Updating vm02:/etc/ceph/ceph.client.admin.keyring 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: cephadm 2026-03-06T22:38:57.383169+0000 mgr.vm02.opvwec (mgr.14199) 12 : cephadm [INF] Updating vm02:/etc/ceph/ceph.client.admin.keyring 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: cephadm 2026-03-06T22:38:57.389925+0000 mgr.vm02.opvwec (mgr.14199) 13 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: cephadm 2026-03-06T22:38:57.389925+0000 mgr.vm02.opvwec (mgr.14199) 13 : cephadm [INF] 
Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: cephadm 2026-03-06T22:38:57.419595+0000 mgr.vm02.opvwec (mgr.14199) 14 : cephadm [INF] Updating vm02:/var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/config/ceph.client.admin.keyring 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: cephadm 2026-03-06T22:38:57.419595+0000 mgr.vm02.opvwec (mgr.14199) 14 : cephadm [INF] Updating vm02:/var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/config/ceph.client.admin.keyring 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: cephadm 2026-03-06T22:38:57.429122+0000 mgr.vm02.opvwec (mgr.14199) 15 : cephadm [INF] Updating vm07:/var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/config/ceph.client.admin.keyring 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: cephadm 2026-03-06T22:38:57.429122+0000 mgr.vm02.opvwec (mgr.14199) 15 : cephadm [INF] Updating vm07:/var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/config/ceph.client.admin.keyring 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: cephadm 2026-03-06T22:38:57.478473+0000 mgr.vm02.opvwec (mgr.14199) 16 : cephadm [INF] Deploying daemon ceph-exporter.vm07 on vm07 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: cephadm 2026-03-06T22:38:57.478473+0000 mgr.vm02.opvwec (mgr.14199) 16 : cephadm [INF] Deploying daemon ceph-exporter.vm07 on vm07 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:57.993622+0000 mon.vm02 (mon.0) 203 : audit [DBG] from='client.? 192.168.123.107:0/2111360740' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:57.993622+0000 mon.vm02 (mon.0) 203 : audit [DBG] from='client.? 
192.168.123.107:0/2111360740' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:58.831709+0000 mon.vm02 (mon.0) 204 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:58.831709+0000 mon.vm02 (mon.0) 204 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:58.834022+0000 mon.vm02 (mon.0) 205 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:58.834022+0000 mon.vm02 (mon.0) 205 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:58.835982+0000 mon.vm02 (mon.0) 206 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:58.835982+0000 mon.vm02 (mon.0) 206 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:58.837793+0000 mon.vm02 (mon.0) 207 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:58.837793+0000 mon.vm02 (mon.0) 207 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:58.838690+0000 mon.vm02 (mon.0) 208 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm07", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:58.838690+0000 mon.vm02 (mon.0) 208 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm07", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:58.840272+0000 mon.vm02 (mon.0) 209 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.vm07", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:58.840272+0000 mon.vm02 (mon.0) 209 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.vm07", "caps": ["mon", "profile 
crash", "mgr", "profile crash"]}]': finished 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:58.841321+0000 mon.vm02 (mon.0) 210 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:58.841321+0000 mon.vm02 (mon.0) 210 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: cephadm 2026-03-06T22:38:58.841988+0000 mgr.vm02.opvwec (mgr.14199) 17 : cephadm [INF] Deploying daemon crash.vm07 on vm07 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: cephadm 2026-03-06T22:38:58.841988+0000 mgr.vm02.opvwec (mgr.14199) 17 : cephadm [INF] Deploying daemon crash.vm07 on vm07 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:59.743958+0000 mon.vm02 (mon.0) 211 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:59.743958+0000 mon.vm02 (mon.0) 211 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:59.746065+0000 mon.vm02 (mon.0) 212 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:59.746065+0000 mon.vm02 (mon.0) 212 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:59.748113+0000 mon.vm02 (mon.0) 213 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:59.748113+0000 mon.vm02 (mon.0) 213 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:59.749758+0000 mon.vm02 (mon.0) 214 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:38:59.749758+0000 mon.vm02 (mon.0) 214 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: cephadm 2026-03-06T22:38:59.750471+0000 mgr.vm02.opvwec (mgr.14199) 18 : cephadm [INF] Deploying daemon node-exporter.vm07 on vm07 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: cephadm 2026-03-06T22:38:59.750471+0000 mgr.vm02.opvwec (mgr.14199) 18 : cephadm [INF] Deploying daemon node-exporter.vm07 on vm07 2026-03-06T23:39:02.740 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:00.480936+0000 mon.vm02 (mon.0) 215 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:00.480936+0000 mon.vm02 (mon.0) 215 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:00.483673+0000 mon.vm02 (mon.0) 216 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:00.483673+0000 mon.vm02 (mon.0) 216 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:00.486240+0000 mon.vm02 (mon.0) 217 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:00.486240+0000 mon.vm02 (mon.0) 217 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:00.489021+0000 mon.vm02 (mon.0) 218 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:00.489021+0000 mon.vm02 (mon.0) 218 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:00.490399+0000 mon.vm02 (mon.0) 219 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm07.jbleen", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:00.490399+0000 mon.vm02 (mon.0) 219 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm07.jbleen", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:00.491663+0000 mon.vm02 (mon.0) 220 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.vm07.jbleen", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-06T23:39:02.740 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:00.491663+0000 mon.vm02 (mon.0) 220 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.vm07.jbleen", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-06T23:39:02.741 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:00.493054+0000 mon.vm02 (mon.0) 221 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:00.493054+0000 mon.vm02 (mon.0) 221 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:00.493563+0000 mon.vm02 (mon.0) 222 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:00.493563+0000 mon.vm02 (mon.0) 222 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: cephadm 2026-03-06T22:39:00.494040+0000 mgr.vm02.opvwec (mgr.14199) 19 : cephadm [INF] Deploying daemon mgr.vm07.jbleen on vm07 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: cephadm 2026-03-06T22:39:00.494040+0000 mgr.vm02.opvwec (mgr.14199) 19 : cephadm [INF] Deploying daemon mgr.vm07.jbleen on vm07 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:00.777305+0000 mon.vm02 (mon.0) 223 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:00.777305+0000 mon.vm02 (mon.0) 223 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:01.325433+0000 mon.vm02 (mon.0) 224 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:01.325433+0000 mon.vm02 (mon.0) 224 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:01.328232+0000 mon.vm02 (mon.0) 225 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:01.328232+0000 mon.vm02 (mon.0) 225 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:01.331126+0000 mon.vm02 (mon.0) 226 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:01.331126+0000 mon.vm02 (mon.0) 226 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' 
entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:01.333603+0000 mon.vm02 (mon.0) 227 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:01.333603+0000 mon.vm02 (mon.0) 227 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:01.334949+0000 mon.vm02 (mon.0) 228 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:01.334949+0000 mon.vm02 (mon.0) 228 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:01.335457+0000 mon.vm02 (mon.0) 229 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:01.335457+0000 mon.vm02 (mon.0) 229 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: cephadm 2026-03-06T22:39:01.335980+0000 mgr.vm02.opvwec (mgr.14199) 20 : cephadm [INF] Deploying daemon mon.vm07 on vm07 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: cephadm 2026-03-06T22:39:01.335980+0000 mgr.vm02.opvwec (mgr.14199) 20 : cephadm [INF] Deploying daemon mon.vm07 on vm07 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:02.373749+0000 mon.vm02 (mon.0) 230 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:02.373749+0000 mon.vm02 (mon.0) 230 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:02.377947+0000 mon.vm02 (mon.0) 231 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:02.377947+0000 mon.vm02 (mon.0) 231 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:02.381057+0000 mon.vm02 (mon.0) 232 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:02.381057+0000 mon.vm02 (mon.0) 232 : audit 
[INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:02.384056+0000 mon.vm02 (mon.0) 233 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:02.384056+0000 mon.vm02 (mon.0) 233 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:02.387830+0000 mon.vm02 (mon.0) 234 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:02.387830+0000 mon.vm02 (mon.0) 234 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:02.397947+0000 mon.vm02 (mon.0) 235 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: audit 2026-03-06T22:39:02.397947+0000 mon.vm02 (mon.0) 235 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.689+0000 7fb25b76b640 1 mon.vm07@-1(synchronizing).paxosservice(auth 1..8) refresh upgraded, format 0 -> 3 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.689+0000 7fb25b76b640 4 mon.vm07@-1(synchronizing).mgr e0 loading version 17 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.689+0000 7fb25b76b640 4 mon.vm07@-1(synchronizing).mgr e17 active server: [v2:192.168.123.102:6800/2981991092,v1:192.168.123.102:6801/2981991092](14199) 2026-03-06T23:39:02.741 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:02 vm07 bash[20848]: debug 2026-03-06T22:39:02.689+0000 7fb25b76b640 4 mon.vm07@-1(synchronizing).mgr e17 mkfs or daemon transitioned to available, loading commands 2026-03-06T23:39:02.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:02 vm02 bash[17013]: cephadm 2026-03-06T22:39:01.335980+0000 mgr.vm02.opvwec (mgr.14199) 20 : cephadm [INF] Deploying daemon mon.vm07 on vm07 2026-03-06T23:39:02.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:02 vm02 bash[17013]: cephadm 2026-03-06T22:39:01.335980+0000 mgr.vm02.opvwec (mgr.14199) 20 : cephadm [INF] Deploying daemon mon.vm07 on vm07 2026-03-06T23:39:02.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:02 vm02 bash[17013]: audit 2026-03-06T22:39:02.373749+0000 mon.vm02 (mon.0) 230 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:02 vm02 bash[17013]: audit 2026-03-06T22:39:02.373749+0000 mon.vm02 (mon.0) 230 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 
2026-03-06T23:39:02.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:02 vm02 bash[17013]: audit 2026-03-06T22:39:02.377947+0000 mon.vm02 (mon.0) 231 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:02 vm02 bash[17013]: audit 2026-03-06T22:39:02.377947+0000 mon.vm02 (mon.0) 231 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:02 vm02 bash[17013]: audit 2026-03-06T22:39:02.381057+0000 mon.vm02 (mon.0) 232 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:02 vm02 bash[17013]: audit 2026-03-06T22:39:02.381057+0000 mon.vm02 (mon.0) 232 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:02 vm02 bash[17013]: audit 2026-03-06T22:39:02.384056+0000 mon.vm02 (mon.0) 233 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:02 vm02 bash[17013]: audit 2026-03-06T22:39:02.384056+0000 mon.vm02 (mon.0) 233 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:02 vm02 bash[17013]: audit 2026-03-06T22:39:02.387830+0000 mon.vm02 (mon.0) 234 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:02 vm02 bash[17013]: audit 2026-03-06T22:39:02.387830+0000 mon.vm02 (mon.0) 234 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:02.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:02 vm02 bash[17013]: audit 2026-03-06T22:39:02.397947+0000 mon.vm02 (mon.0) 235 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-06T23:39:02.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:02 vm02 bash[17013]: audit 2026-03-06T22:39:02.397947+0000 mon.vm02 (mon.0) 235 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-06T23:39:07.143 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm07/config 2026-03-06T23:39:07.945 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-06T23:39:07.945 
INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":2,"fsid":"f8b8c16a-19ac-11f1-87e7-9b7402b99c44","modified":"2026-03-06T22:39:02.693988Z","created":"2026-03-06T22:37:18.048883Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm02","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"vm07","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]}
2026-03-06T23:39:07.945 INFO:teuthology.orchestra.run.vm07.stderr:dumped monmap epoch 2
2026-03-06T23:39:08.031 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:07 vm07 bash[20848]: audit 2026-03-06T22:39:02.697259+0000 mon.vm02 (mon.0) 237 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mon metadata", "id": "vm02"}]: dispatch
2026-03-06T23:39:08.031 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:07 vm07 bash[20848]: audit 2026-03-06T22:39:02.697354+0000 mon.vm02 (mon.0) 238 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch
2026-03-06T23:39:08.031 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:07 vm07 bash[20848]: cluster 2026-03-06T22:39:02.697432+0000 mon.vm02 (mon.0) 239 : cluster [INF] mon.vm02 calling monitor election
2026-03-06T23:39:08.031 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:07 vm07 bash[20848]: audit 2026-03-06T22:39:03.692844+0000 mon.vm02 (mon.0) 240 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch
2026-03-06T23:39:08.031 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:07 vm07 bash[20848]: audit 2026-03-06T22:39:04.693076+0000 mon.vm02 (mon.0) 241 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch
2026-03-06T23:39:08.031 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:07 vm07 bash[20848]: cluster 2026-03-06T22:39:04.698802+0000 mon.vm07 (mon.1) 1 : cluster [INF] mon.vm07 calling monitor election
2026-03-06T23:39:08.031 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:07 vm07 bash[20848]: audit 2026-03-06T22:39:05.692986+0000 mon.vm02 (mon.0) 242 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch
2026-03-06T23:39:08.032 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:07 vm07 bash[20848]: audit 2026-03-06T22:39:05.798495+0000 mon.vm02 (mon.0) 243 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:39:08.032 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:07 vm07 bash[20848]: audit 2026-03-06T22:39:06.693438+0000 mon.vm02 (mon.0) 244 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch
2026-03-06T23:39:08.032 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:07 vm07 bash[20848]: audit 2026-03-06T22:39:07.693184+0000 mon.vm02 (mon.0) 245 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch
2026-03-06T23:39:08.032 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:07 vm07 bash[20848]: cluster 2026-03-06T22:39:07.702792+0000 mon.vm02 (mon.0) 246 : cluster [INF] mon.vm02 is new leader, mons vm02,vm07 in quorum (ranks 0,1)
2026-03-06T23:39:08.032 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:07 vm07 bash[20848]: cluster 2026-03-06T22:39:07.707359+0000 mon.vm02 (mon.0) 247 : cluster [DBG] monmap epoch 2
2026-03-06T23:39:08.032 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:07 vm07 bash[20848]: cluster 2026-03-06T22:39:07.707403+0000 mon.vm02 (mon.0) 248 : cluster [DBG] fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44
2026-03-06T23:39:08.032 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:07 vm07 bash[20848]: cluster 2026-03-06T22:39:07.707444+0000 mon.vm02 (mon.0) 249 : cluster [DBG] last_changed 2026-03-06T22:39:02.693988+0000
2026-03-06T23:39:08.032 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:07 vm07 bash[20848]: cluster 2026-03-06T22:39:07.707485+0000 mon.vm02 (mon.0) 250 : cluster [DBG] created 2026-03-06T22:37:18.048883+0000
2026-03-06T23:39:08.032 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:07 vm07 bash[20848]: cluster 2026-03-06T22:39:07.707525+0000 mon.vm02 (mon.0) 251 : cluster [DBG] min_mon_release 19 (squid)
2026-03-06T23:39:08.032 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:07 vm07 bash[20848]: cluster 2026-03-06T22:39:07.707566+0000 mon.vm02 (mon.0) 252 : cluster [DBG] election_strategy: 1
2026-03-06T23:39:08.032 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:07 vm07 bash[20848]: cluster 2026-03-06T22:39:07.707610+0000 mon.vm02 (mon.0) 253 : cluster [DBG] 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.vm02
2026-03-06T23:39:08.032 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:07 vm07 bash[20848]: cluster 2026-03-06T22:39:07.707651+0000 mon.vm02 (mon.0) 254 : cluster [DBG] 1: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.vm07
2026-03-06T23:39:08.032 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:07 vm07 bash[20848]: cluster 2026-03-06T22:39:07.708048+0000 mon.vm02 (mon.0) 255 : cluster [DBG] fsmap
2026-03-06T23:39:08.032 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:07 vm07 bash[20848]: cluster 2026-03-06T22:39:07.708132+0000 mon.vm02 (mon.0) 256 : cluster [DBG] osdmap e5: 0 total, 0 up, 0 in
2026-03-06T23:39:08.032 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:07 vm07 bash[20848]: cluster 2026-03-06T22:39:07.708364+0000 mon.vm02 (mon.0) 257 : cluster [DBG] mgrmap e17: vm02.opvwec(active, since 16s)
2026-03-06T23:39:08.032 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:07 vm07 bash[20848]: cluster 2026-03-06T22:39:07.708625+0000 mon.vm02 (mon.0) 258 : cluster [INF] overall HEALTH_OK
2026-03-06T23:39:08.032 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:07 vm07 bash[20848]: audit 2026-03-06T22:39:07.714147+0000 mon.vm02 (mon.0) 259 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:08.032 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:07 vm07 bash[20848]: audit 2026-03-06T22:39:07.718780+0000 mon.vm02 (mon.0) 260 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:08.032 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:07 vm07 bash[20848]: audit 2026-03-06T22:39:07.723056+0000 mon.vm02 (mon.0) 261 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:08.032 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:07 vm07 bash[20848]: audit 2026-03-06T22:39:07.723934+0000 mon.vm02 (mon.0) 262 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:39:08.032 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:07 vm07 bash[20848]: audit 2026-03-06T22:39:07.724891+0000 mon.vm02 (mon.0) 263 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-06T23:39:08.032 INFO:tasks.cephadm:Generating final ceph.conf file...
2026-03-06T23:39:08.033 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph config generate-minimal-conf
2026-03-06T23:39:08.877 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:08 vm07 bash[20848]: cephadm 2026-03-06T22:39:07.725893+0000 mgr.vm02.opvwec (mgr.14199) 21 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf
2026-03-06T23:39:08.877 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:08 vm07 bash[20848]: cephadm 2026-03-06T22:39:07.726034+0000 mgr.vm02.opvwec (mgr.14199) 22 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf
2026-03-06T23:39:08.877 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:08 vm07 bash[20848]: cephadm 2026-03-06T22:39:07.760402+0000 mgr.vm02.opvwec (mgr.14199) 23 : cephadm [INF] Updating vm02:/var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/config/ceph.conf
2026-03-06T23:39:08.877 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:08 vm07 bash[20848]: cephadm 2026-03-06T22:39:07.769084+0000 mgr.vm02.opvwec (mgr.14199) 24 : cephadm [INF] Updating vm07:/var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/config/ceph.conf
2026-03-06T23:39:08.877 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:08 vm07 bash[20848]: audit 2026-03-06T22:39:07.804846+0000 mon.vm02 (mon.0) 264 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:08.877 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:08 vm07 bash[20848]: audit 2026-03-06T22:39:07.809176+0000 mon.vm02 (mon.0) 265 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:08.877 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:08 vm07 bash[20848]: audit 2026-03-06T22:39:07.813601+0000 mon.vm02 (mon.0) 266 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:08.877 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:08 vm07 bash[20848]: audit 2026-03-06T22:39:07.817275+0000 mon.vm02 (mon.0) 267 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:08.877 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:08 vm07 bash[20848]: audit 2026-03-06T22:39:07.820360+0000 mon.vm02 (mon.0) 268 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:08.877 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:08 vm07 bash[20848]: cephadm 2026-03-06T22:39:07.829962+0000 mgr.vm02.opvwec (mgr.14199) 25 : cephadm [INF] Reconfiguring grafana.vm02 (dependencies changed)...
2026-03-06T23:39:08.878 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:08 vm07 bash[20848]: cephadm 2026-03-06T22:39:07.863635+0000 mgr.vm02.opvwec (mgr.14199) 26 : cephadm [INF] Reconfiguring daemon grafana.vm02 on vm02
2026-03-06T23:39:08.878 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:08 vm07 bash[20848]: audit 2026-03-06T22:39:07.937710+0000 mon.vm02 (mon.0) 269 : audit [DBG] from='client.? 192.168.123.107:0/1291232449' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-06T23:39:08.878 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:08 vm07 bash[20848]: audit 2026-03-06T22:39:08.606919+0000 mon.vm02 (mon.0) 270 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:08.878 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:08 vm07 bash[20848]: audit 2026-03-06T22:39:08.611478+0000 mon.vm02 (mon.0) 271 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:08.878 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:08 vm07 bash[20848]: cephadm 2026-03-06T22:39:08.612945+0000 mgr.vm02.opvwec (mgr.14199) 27 : cephadm [INF] Reconfiguring alertmanager.vm02 (dependencies changed)...
2026-03-06T23:39:08.878 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:08 vm07 bash[20848]: cephadm 2026-03-06T22:39:08.616852+0000 mgr.vm02.opvwec (mgr.14199) 28 : cephadm [INF] Reconfiguring daemon alertmanager.vm02 on vm02
2026-03-06T23:39:08.878 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:08 vm07 bash[20848]: audit 2026-03-06T22:39:08.700389+0000 mon.vm02 (mon.0) 272 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch
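(Aside: every mon audit entry above embeds the dispatched command as a JSON array after cmd=. A minimal sketch, in Python, of a hypothetical helper — not part of teuthology — that extracts those commands from saved journal lines:

    import json
    import re

    # Mon audit lines end with: cmd=[{...}]: dispatch
    CMD_RE = re.compile(r"cmd=(\[.*?\]): dispatch")

    def audit_commands(lines):
        """Yield each command dict dispatched in mon audit log lines."""
        for line in lines:
            m = CMD_RE.search(line)
            if m:
                # e.g. {"prefix": "mon metadata", "id": "vm07"}
                yield from json.loads(m.group(1))

Fed the lines above, this yields entries such as {"prefix": "osd blocklist ls", "format": "json"} and {"prefix": "mon dump", "format": "json"}.)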
2026-03-06T23:39:10.534 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:10 vm07 bash[20848]: audit 2026-03-06T22:39:09.308452+0000 mon.vm02 (mon.0) 273 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:10.534 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:10 vm07 bash[20848]: audit 2026-03-06T22:39:09.316608+0000 mon.vm02 (mon.0) 274 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:10.534 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:10 vm07 bash[20848]: cephadm 2026-03-06T22:39:09.317354+0000 mgr.vm02.opvwec (mgr.14199) 29 : cephadm [INF] Reconfiguring mon.vm02 (unknown last config time)...
2026-03-06T23:39:10.534 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:10 vm07 bash[20848]: audit 2026-03-06T22:39:09.318989+0000 mon.vm02 (mon.0) 275 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-06T23:39:10.535 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:10 vm07 bash[20848]: audit 2026-03-06T22:39:09.319949+0000 mon.vm02 (mon.0) 276 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-06T23:39:10.535 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:10 vm07 bash[20848]: audit 2026-03-06T22:39:09.320325+0000 mon.vm02 (mon.0) 277 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:39:10.535 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:10 vm07 bash[20848]: cephadm 2026-03-06T22:39:09.320892+0000 mgr.vm02.opvwec (mgr.14199) 30 : cephadm [INF] Reconfiguring daemon mon.vm02 on vm02
2026-03-06T23:39:10.535 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:10 vm07 bash[20848]: audit 2026-03-06T22:39:09.703456+0000 mon.vm02 (mon.0) 278 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:10.535 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:10 vm07 bash[20848]: audit 2026-03-06T22:39:09.709401+0000 mon.vm02 (mon.0) 279 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:10.535 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:10 vm07 bash[20848]: audit 2026-03-06T22:39:09.710844+0000 mon.vm02 (mon.0) 280 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm02", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-06T23:39:10.535 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:10 vm07 bash[20848]: audit 2026-03-06T22:39:09.711488+0000 mon.vm02 (mon.0) 281 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:39:10.535 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:10 vm07 bash[20848]: audit 2026-03-06T22:39:10.076150+0000 mon.vm02 (mon.0) 282 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:10.535 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:10 vm07 bash[20848]: audit 2026-03-06T22:39:10.080565+0000 mon.vm02 (mon.0) 283 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
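(Aside: the auth get-or-create dispatch for client.crash.vm02 above is cephadm provisioning the crash agent's keyring; the capability pairs are taken verbatim from that audit entry. A minimal sketch of the equivalent hand-run command, driven from Python's subprocess and assuming admin credentials are available on the host:

    import subprocess

    # CLI form of the audit entry above: create (or fetch) the crash
    # agent's key with the "profile crash" capability on mon and mgr.
    subprocess.run(
        ["ceph", "auth", "get-or-create", "client.crash.vm02",
         "mon", "profile crash",
         "mgr", "profile crash"],
        check=True,
    )

The same get-or-create pattern recurs below for mgr.vm02.opvwec and client.ceph-exporter.vm02, each with its own capability profile.)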
2026-03-06T23:39:11.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:11 vm02 bash[17013]: cephadm 2026-03-06T22:39:09.710124+0000 mgr.vm02.opvwec (mgr.14199) 31 : cephadm [INF] Reconfiguring crash.vm02 (monmap changed)...
2026-03-06T23:39:11.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:11 vm02 bash[17013]: cephadm 2026-03-06T22:39:09.712015+0000 mgr.vm02.opvwec (mgr.14199) 32 : cephadm [INF] Reconfiguring daemon crash.vm02 on vm02
2026-03-06T23:39:11.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:11 vm02 bash[17013]: cephadm 2026-03-06T22:39:10.081189+0000 mgr.vm02.opvwec (mgr.14199) 33 : cephadm [INF] Reconfiguring prometheus.vm02 (dependencies changed)...
2026-03-06T23:39:11.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:11 vm02 bash[17013]: cephadm 2026-03-06T22:39:10.238325+0000 mgr.vm02.opvwec (mgr.14199) 34 : cephadm [INF] Reconfiguring daemon prometheus.vm02 on vm02
2026-03-06T23:39:11.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:11 vm02 bash[17013]: audit 2026-03-06T22:39:10.850555+0000 mon.vm02 (mon.0) 284 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:11.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:11 vm02 bash[17013]: audit 2026-03-06T22:39:10.856012+0000 mon.vm02 (mon.0) 285 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:11.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:11 vm02 bash[17013]: audit 2026-03-06T22:39:10.857343+0000 mon.vm02 (mon.0) 286 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm02.opvwec", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-06T23:39:11.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:11 vm02 bash[17013]: audit 2026-03-06T22:39:10.857882+0000 mon.vm02 (mon.0) 287 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-06T23:39:11.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:11 vm02 bash[17013]: audit 2026-03-06T22:39:10.858380+0000 mon.vm02 (mon.0) 288 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:39:11.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:11 vm02 bash[17013]: audit 2026-03-06T22:39:11.263356+0000 mon.vm02 (mon.0) 289 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:11.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:11 vm02 bash[17013]: audit 2026-03-06T22:39:11.268673+0000 mon.vm02 (mon.0) 290 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:11.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:11 vm02 bash[17013]: audit 2026-03-06T22:39:11.270406+0000 mon.vm02 (mon.0) 291 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm02", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-06T23:39:11.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:11 vm02 bash[17013]: audit 2026-03-06T22:39:11.270943+0000 mon.vm02 (mon.0) 292 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:11.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:11 vm07 bash[20848]: audit 2026-03-06T22:39:11.263356+0000 mon.vm02 (mon.0) 289 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:11.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:11 vm07 bash[20848]: audit 2026-03-06T22:39:11.263356+0000 mon.vm02 (mon.0) 289 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:11.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:11 vm07 bash[20848]: audit 2026-03-06T22:39:11.268673+0000 mon.vm02 (mon.0) 290 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:11.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:11 vm07 bash[20848]: audit 2026-03-06T22:39:11.268673+0000 mon.vm02 (mon.0) 290 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:11.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:11 vm07 bash[20848]: audit 2026-03-06T22:39:11.270406+0000 mon.vm02 (mon.0) 291 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm02", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-06T23:39:11.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:11 vm07 bash[20848]: audit 2026-03-06T22:39:11.270406+0000 mon.vm02 (mon.0) 291 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm02", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-06T23:39:11.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:11 vm07 bash[20848]: audit 2026-03-06T22:39:11.270943+0000 mon.vm02 (mon.0) 292 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:11.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:11 vm07 bash[20848]: audit 2026-03-06T22:39:11.270943+0000 mon.vm02 (mon.0) 292 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:12.660 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:12 vm07 bash[20848]: cluster 2026-03-06T22:39:10.747165+0000 mgr.vm02.opvwec (mgr.14199) 35 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-06T23:39:12.660 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:12 vm07 bash[20848]: cluster 2026-03-06T22:39:10.747165+0000 mgr.vm02.opvwec (mgr.14199) 35 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-06T23:39:12.660 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:12 vm07 bash[20848]: cephadm 2026-03-06T22:39:10.856676+0000 mgr.vm02.opvwec (mgr.14199) 36 : cephadm [INF] Reconfiguring mgr.vm02.opvwec (unknown last config time)... 2026-03-06T23:39:12.660 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:12 vm07 bash[20848]: cephadm 2026-03-06T22:39:10.856676+0000 mgr.vm02.opvwec (mgr.14199) 36 : cephadm [INF] Reconfiguring mgr.vm02.opvwec (unknown last config time)... 
2026-03-06T23:39:12.661 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:12 vm07 bash[20848]: cephadm 2026-03-06T22:39:10.859104+0000 mgr.vm02.opvwec (mgr.14199) 37 : cephadm [INF] Reconfiguring daemon mgr.vm02.opvwec on vm02
2026-03-06T23:39:12.661 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:12 vm07 bash[20848]: cephadm 2026-03-06T22:39:11.269484+0000 mgr.vm02.opvwec (mgr.14199) 38 : cephadm [INF] Reconfiguring ceph-exporter.vm02 (monmap changed)...
2026-03-06T23:39:12.661 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:12 vm07 bash[20848]: cephadm 2026-03-06T22:39:11.271423+0000 mgr.vm02.opvwec (mgr.14199) 39 : cephadm [INF] Reconfiguring daemon ceph-exporter.vm02 on vm02
2026-03-06T23:39:12.661 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:12 vm07 bash[20848]: audit 2026-03-06T22:39:11.650092+0000 mon.vm02 (mon.0) 293 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:12.661 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:12 vm07 bash[20848]: audit 2026-03-06T22:39:11.654049+0000 mon.vm02 (mon.0) 294 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:12.661 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:12 vm07 bash[20848]: cephadm 2026-03-06T22:39:11.654700+0000 mgr.vm02.opvwec (mgr.14199) 40 : cephadm [INF] Reconfiguring mon.vm07 (monmap changed)...
2026-03-06T23:39:12.661 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:12 vm07 bash[20848]: audit 2026-03-06T22:39:11.654878+0000 mon.vm02 (mon.0) 295 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-06T23:39:12.661 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:12 vm07 bash[20848]: audit 2026-03-06T22:39:11.655323+0000 mon.vm02 (mon.0) 296 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-06T23:39:12.661 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:12 vm07 bash[20848]: audit 2026-03-06T22:39:11.655750+0000 mon.vm02 (mon.0) 297 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:39:12.661 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:12 vm07 bash[20848]: cephadm 2026-03-06T22:39:11.656324+0000 mgr.vm02.opvwec (mgr.14199) 41 : cephadm [INF] Reconfiguring daemon mon.vm07 on vm07
2026-03-06T23:39:12.924 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:12 vm07 bash[20848]: cluster 2026-03-06T22:39:12.045236+0000 mon.vm02 (mon.0) 298 : cluster [DBG] Standby manager daemon vm07.jbleen started
2026-03-06T23:39:12.924 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:12 vm07 bash[20848]: audit 2026-03-06T22:39:12.046672+0000 mon.vm02 (mon.0) 299 : audit [DBG] from='mgr.? 192.168.123.107:0/1327799273' entity='mgr.vm07.jbleen' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm07.jbleen/crt"}]: dispatch
2026-03-06T23:39:12.924 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:12 vm07 bash[20848]: audit 2026-03-06T22:39:12.047692+0000 mon.vm02 (mon.0) 300 : audit [DBG] from='mgr.? 192.168.123.107:0/1327799273' entity='mgr.vm07.jbleen' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-06T23:39:12.924 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:12 vm07 bash[20848]: audit 2026-03-06T22:39:12.050333+0000 mon.vm02 (mon.0) 301 : audit [DBG] from='mgr.? 192.168.123.107:0/1327799273' entity='mgr.vm07.jbleen' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm07.jbleen/key"}]: dispatch
2026-03-06T23:39:12.924 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:12 vm07 bash[20848]: audit 2026-03-06T22:39:12.053037+0000 mon.vm02 (mon.0) 302 : audit [DBG] from='mgr.? 192.168.123.107:0/1327799273' entity='mgr.vm07.jbleen' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-06T23:39:12.924 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:12 vm07 bash[20848]: audit 2026-03-06T22:39:12.085155+0000 mon.vm02 (mon.0) 303 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:12.924 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:12 vm07 bash[20848]: audit 2026-03-06T22:39:12.089492+0000 mon.vm02 (mon.0) 304 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:12.924 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:12 vm07 bash[20848]: audit 2026-03-06T22:39:12.090464+0000 mon.vm02 (mon.0) 305 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm07.jbleen", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-06T23:39:12.924 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:12 vm07 bash[20848]: audit 2026-03-06T22:39:12.090887+0000 mon.vm02 (mon.0) 306 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-06T23:39:12.924 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:12 vm07 bash[20848]: audit 2026-03-06T22:39:12.091210+0000 mon.vm02 (mon.0) 307 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:39:12.924 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:12 vm07 bash[20848]: audit 2026-03-06T22:39:12.455965+0000 mon.vm02 (mon.0) 308 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:12.924 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:12 vm07 bash[20848]: audit 2026-03-06T22:39:12.459259+0000 mon.vm02 (mon.0) 309 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:12.925 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:12 vm07 bash[20848]: audit 2026-03-06T22:39:12.460048+0000 mon.vm02 (mon.0) 310 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm07", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-06T23:39:12.925 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:12 vm07 bash[20848]: audit 2026-03-06T22:39:12.460518+0000 mon.vm02 (mon.0) 311 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:39:12.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:12 vm02 bash[17013]: cluster 2026-03-06T22:39:10.747165+0000 mgr.vm02.opvwec (mgr.14199) 35 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:12.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:12 vm02 bash[17013]: cephadm 2026-03-06T22:39:10.856676+0000 mgr.vm02.opvwec (mgr.14199) 36 : cephadm [INF] Reconfiguring mgr.vm02.opvwec (unknown last config time)...
2026-03-06T23:39:12.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:12 vm02 bash[17013]: cephadm 2026-03-06T22:39:10.859104+0000 mgr.vm02.opvwec (mgr.14199) 37 : cephadm [INF] Reconfiguring daemon mgr.vm02.opvwec on vm02
2026-03-06T23:39:12.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:12 vm02 bash[17013]: cephadm 2026-03-06T22:39:11.269484+0000 mgr.vm02.opvwec (mgr.14199) 38 : cephadm [INF] Reconfiguring ceph-exporter.vm02 (monmap changed)...
2026-03-06T23:39:12.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:12 vm02 bash[17013]: cephadm 2026-03-06T22:39:11.271423+0000 mgr.vm02.opvwec (mgr.14199) 39 : cephadm [INF] Reconfiguring daemon ceph-exporter.vm02 on vm02
2026-03-06T23:39:12.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:12 vm02 bash[17013]: audit 2026-03-06T22:39:11.650092+0000 mon.vm02 (mon.0) 293 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:12.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:12 vm02 bash[17013]: audit 2026-03-06T22:39:11.654049+0000 mon.vm02 (mon.0) 294 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:12.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:12 vm02 bash[17013]: cephadm 2026-03-06T22:39:11.654700+0000 mgr.vm02.opvwec (mgr.14199) 40 : cephadm [INF] Reconfiguring mon.vm07 (monmap changed)...
2026-03-06T23:39:12.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:12 vm02 bash[17013]: audit 2026-03-06T22:39:11.654878+0000 mon.vm02 (mon.0) 295 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-06T23:39:12.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:12 vm02 bash[17013]: audit 2026-03-06T22:39:11.655323+0000 mon.vm02 (mon.0) 296 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-06T23:39:12.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:12 vm02 bash[17013]: audit 2026-03-06T22:39:11.655750+0000 mon.vm02 (mon.0) 297 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:39:12.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:12 vm02 bash[17013]: cephadm 2026-03-06T22:39:11.656324+0000 mgr.vm02.opvwec (mgr.14199) 41 : cephadm [INF] Reconfiguring daemon mon.vm07 on vm07
2026-03-06T23:39:12.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:12 vm02 bash[17013]: cluster 2026-03-06T22:39:12.045236+0000 mon.vm02 (mon.0) 298 : cluster [DBG] Standby manager daemon vm07.jbleen started
2026-03-06T23:39:12.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:12 vm02 bash[17013]: audit 2026-03-06T22:39:12.046672+0000 mon.vm02 (mon.0) 299 : audit [DBG] from='mgr.? 192.168.123.107:0/1327799273' entity='mgr.vm07.jbleen' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm07.jbleen/crt"}]: dispatch
2026-03-06T23:39:12.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:12 vm02 bash[17013]: audit 2026-03-06T22:39:12.047692+0000 mon.vm02 (mon.0) 300 : audit [DBG] from='mgr.? 192.168.123.107:0/1327799273' entity='mgr.vm07.jbleen' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-06T23:39:12.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:12 vm02 bash[17013]: audit 2026-03-06T22:39:12.050333+0000 mon.vm02 (mon.0) 301 : audit [DBG] from='mgr.? 192.168.123.107:0/1327799273' entity='mgr.vm07.jbleen' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm07.jbleen/key"}]: dispatch
2026-03-06T23:39:12.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:12 vm02 bash[17013]: audit 2026-03-06T22:39:12.053037+0000 mon.vm02 (mon.0) 302 : audit [DBG] from='mgr.? 192.168.123.107:0/1327799273' entity='mgr.vm07.jbleen' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-06T23:39:12.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:12 vm02 bash[17013]: audit 2026-03-06T22:39:12.085155+0000 mon.vm02 (mon.0) 303 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:12.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:12 vm02 bash[17013]: audit 2026-03-06T22:39:12.089492+0000 mon.vm02 (mon.0) 304 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:12.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:12 vm02 bash[17013]: audit 2026-03-06T22:39:12.090464+0000 mon.vm02 (mon.0) 305 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm07.jbleen", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-06T23:39:12.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:12 vm02 bash[17013]: audit 2026-03-06T22:39:12.090887+0000 mon.vm02 (mon.0) 306 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-06T23:39:12.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:12 vm02 bash[17013]: audit 2026-03-06T22:39:12.091210+0000 mon.vm02 (mon.0) 307 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:39:12.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:12 vm02 bash[17013]: audit 2026-03-06T22:39:12.455965+0000 mon.vm02 (mon.0) 308 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:12.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:12 vm02 bash[17013]: audit 2026-03-06T22:39:12.459259+0000 mon.vm02 (mon.0) 309 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:12.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:12 vm02 bash[17013]: audit 2026-03-06T22:39:12.460048+0000 mon.vm02 (mon.0) 310 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm07", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-06T23:39:12.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:12 vm02 bash[17013]: audit 2026-03-06T22:39:12.460518+0000 mon.vm02 (mon.0) 311 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:39:13.892 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:13 vm02 bash[17013]: cephadm 2026-03-06T22:39:12.090291+0000 mgr.vm02.opvwec (mgr.14199) 42 : cephadm [INF] Reconfiguring mgr.vm07.jbleen (monmap changed)...
2026-03-06T23:39:13.892 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:13 vm02 bash[17013]: cephadm 2026-03-06T22:39:12.091671+0000 mgr.vm02.opvwec (mgr.14199) 43 : cephadm [INF] Reconfiguring daemon mgr.vm07.jbleen on vm07
2026-03-06T23:39:13.892 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:13 vm02 bash[17013]: cephadm 2026-03-06T22:39:12.459879+0000 mgr.vm02.opvwec (mgr.14199) 44 : cephadm [INF] Reconfiguring crash.vm07 (monmap changed)...
2026-03-06T23:39:13.892 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:13 vm02 bash[17013]: cephadm 2026-03-06T22:39:12.461175+0000 mgr.vm02.opvwec (mgr.14199) 45 : cephadm [INF] Reconfiguring daemon crash.vm07 on vm07
2026-03-06T23:39:13.892 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:13 vm02 bash[17013]: cluster 2026-03-06T22:39:12.671219+0000 mon.vm02 (mon.0) 312 : cluster [DBG] mgrmap e18: vm02.opvwec(active, since 21s), standbys: vm07.jbleen
2026-03-06T23:39:13.892 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:13 vm02 bash[17013]: audit 2026-03-06T22:39:12.671348+0000 mon.vm02 (mon.0) 313 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mgr metadata", "who": "vm07.jbleen", "id": "vm07.jbleen"}]: dispatch
2026-03-06T23:39:13.892 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:13 vm02 bash[17013]: audit 2026-03-06T22:39:12.855977+0000 mon.vm02 (mon.0) 314 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:13.892 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:13 vm02 bash[17013]: audit 2026-03-06T22:39:12.860154+0000 mon.vm02 (mon.0) 315 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:13.892 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:13 vm02 bash[17013]: audit 2026-03-06T22:39:12.861296+0000 mon.vm02 (mon.0) 316 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm07", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-06T23:39:13.892 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:13 vm02 bash[17013]: audit 2026-03-06T22:39:12.861816+0000 mon.vm02 (mon.0) 317 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:39:13.892 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:13 vm02 bash[17013]: audit 2026-03-06T22:39:13.230702+0000 mon.vm02 (mon.0) 318 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:13.892 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:13 vm02 bash[17013]: audit 2026-03-06T22:39:13.234528+0000 mon.vm02 (mon.0) 319 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:13.892 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:13 vm02 bash[17013]: audit 2026-03-06T22:39:13.237040+0000 mon.vm02 (mon.0) 320 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-06T23:39:13.892 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:13 vm02 bash[17013]: audit 2026-03-06T22:39:13.238131+0000 mon.vm02 (mon.0) 321 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm02.local:3000"}]: dispatch
2026-03-06T23:39:13.892 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:13 vm02 bash[17013]: audit 2026-03-06T22:39:13.241786+0000 mon.vm02 (mon.0) 322 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:13.892 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:13 vm02 bash[17013]: audit 2026-03-06T22:39:13.251353+0000 mon.vm02 (mon.0) 323 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-06T23:39:13.892 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:13 vm02 bash[17013]: audit 2026-03-06T22:39:13.252153+0000 mon.vm02 (mon.0) 324 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm02.local:9093"}]: dispatch
2026-03-06T23:39:13.892 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:13 vm02 bash[17013]: audit 2026-03-06T22:39:13.256730+0000 mon.vm02 (mon.0) 325 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:13.892 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:13 vm02 bash[17013]: audit 2026-03-06T22:39:13.262728+0000 mon.vm02 (mon.0) 326 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-06T23:39:13.892 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:13 vm02 bash[17013]: audit 2026-03-06T22:39:13.264384+0000 mon.vm02 (mon.0) 327 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm02.local:9095"}]: dispatch
2026-03-06T23:39:13.892 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:13 vm02 bash[17013]: audit 2026-03-06T22:39:13.268670+0000 mon.vm02 (mon.0) 328 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:13.892 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:13 vm02 bash[17013]: audit 2026-03-06T22:39:13.304651+0000 mon.vm02 (mon.0) 329 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-06T23:39:13.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:13 vm07 bash[20848]: cephadm 2026-03-06T22:39:12.090291+0000 mgr.vm02.opvwec (mgr.14199) 42 : cephadm [INF] Reconfiguring mgr.vm07.jbleen (monmap changed)...
2026-03-06T23:39:13.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:13 vm07 bash[20848]: cephadm 2026-03-06T22:39:12.091671+0000 mgr.vm02.opvwec (mgr.14199) 43 : cephadm [INF] Reconfiguring daemon mgr.vm07.jbleen on vm07
2026-03-06T23:39:13.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:13 vm07 bash[20848]: cephadm 2026-03-06T22:39:12.459879+0000 mgr.vm02.opvwec (mgr.14199) 44 : cephadm [INF] Reconfiguring crash.vm07 (monmap changed)...
2026-03-06T23:39:13.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:13 vm07 bash[20848]: cephadm 2026-03-06T22:39:12.461175+0000 mgr.vm02.opvwec (mgr.14199) 45 : cephadm [INF] Reconfiguring daemon crash.vm07 on vm07
2026-03-06T23:39:13.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:13 vm07 bash[20848]: cluster 2026-03-06T22:39:12.671219+0000 mon.vm02 (mon.0) 312 : cluster [DBG] mgrmap e18: vm02.opvwec(active, since 21s), standbys: vm07.jbleen
2026-03-06T23:39:13.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:13 vm07 bash[20848]: audit 2026-03-06T22:39:12.671348+0000 mon.vm02 (mon.0) 313 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mgr metadata", "who": "vm07.jbleen", "id": "vm07.jbleen"}]: dispatch
2026-03-06T23:39:13.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:13 vm07 bash[20848]: audit 2026-03-06T22:39:12.855977+0000 mon.vm02 (mon.0) 314 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:13.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:13 vm07 bash[20848]: audit 2026-03-06T22:39:12.860154+0000 mon.vm02 (mon.0) 315 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:13.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:13 vm07 bash[20848]: audit 2026-03-06T22:39:12.861296+0000 mon.vm02 (mon.0) 316 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm07", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-06T23:39:13.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:13 vm07 bash[20848]: audit 2026-03-06T22:39:12.861816+0000 mon.vm02 (mon.0) 317 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:39:13.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:13 vm07 bash[20848]: audit 2026-03-06T22:39:13.230702+0000 mon.vm02 (mon.0) 318 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:13.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:13 vm07 bash[20848]: audit 2026-03-06T22:39:13.234528+0000 mon.vm02 (mon.0) 319 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:13.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:13 vm07 bash[20848]: audit 2026-03-06T22:39:13.237040+0000 mon.vm02 (mon.0) 320 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-06T23:39:13.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:13 vm07 bash[20848]: audit 2026-03-06T22:39:13.238131+0000 mon.vm02 (mon.0) 321 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm02.local:3000"}]: dispatch
2026-03-06T23:39:13.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:13 vm07 bash[20848]: audit 2026-03-06T22:39:13.241786+0000 mon.vm02 (mon.0) 322 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:13.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:13 vm07 bash[20848]: audit 2026-03-06T22:39:13.251353+0000 mon.vm02 (mon.0) 323 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-06T23:39:13.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:13 vm07 bash[20848]: audit 2026-03-06T22:39:13.252153+0000 mon.vm02 (mon.0) 324 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm02.local:9093"}]: dispatch
2026-03-06T23:39:13.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:13 vm07 bash[20848]: audit 2026-03-06T22:39:13.256730+0000 mon.vm02 (mon.0) 325 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:13.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:13 vm07 bash[20848]: audit 2026-03-06T22:39:13.262728+0000 mon.vm02 (mon.0) 326 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-06T23:39:13.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:13 vm07 bash[20848]: audit 2026-03-06T22:39:13.262728+0000 mon.vm02 (mon.0) 326 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-06T23:39:13.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:13 vm07 bash[20848]: audit 2026-03-06T22:39:13.264384+0000 mon.vm02 (mon.0) 327 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm02.local:9095"}]: dispatch 2026-03-06T23:39:13.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:13 vm07 bash[20848]: audit 2026-03-06T22:39:13.264384+0000 mon.vm02 (mon.0) 327 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm02.local:9095"}]: dispatch 2026-03-06T23:39:13.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:13 vm07 bash[20848]: audit 2026-03-06T22:39:13.268670+0000 mon.vm02 (mon.0) 328 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:13.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:13 vm07 bash[20848]: audit 2026-03-06T22:39:13.268670+0000 mon.vm02 (mon.0) 328 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:13.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:13 vm07 bash[20848]: audit 2026-03-06T22:39:13.304651+0000 mon.vm02 (mon.0) 329 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-06T23:39:13.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:13 vm07 bash[20848]: audit 2026-03-06T22:39:13.304651+0000 mon.vm02 (mon.0) 329 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-06T23:39:14.863 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config 2026-03-06T23:39:14.876 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:14 vm02 bash[17013]: cluster 2026-03-06T22:39:12.747428+0000 mgr.vm02.opvwec (mgr.14199) 46 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-06T23:39:14.876 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:14 vm02 bash[17013]: cluster 2026-03-06T22:39:12.747428+0000 mgr.vm02.opvwec (mgr.14199) 46 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-06T23:39:14.876 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:14 vm02 bash[17013]: cephadm 2026-03-06T22:39:12.860676+0000 mgr.vm02.opvwec (mgr.14199) 47 : cephadm [INF] Reconfiguring ceph-exporter.vm07 (monmap changed)... 2026-03-06T23:39:14.876 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:14 vm02 bash[17013]: cephadm 2026-03-06T22:39:12.860676+0000 mgr.vm02.opvwec (mgr.14199) 47 : cephadm [INF] Reconfiguring ceph-exporter.vm07 (monmap changed)... 
2026-03-06T23:39:14.876 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:14 vm02 bash[17013]: cephadm 2026-03-06T22:39:12.862348+0000 mgr.vm02.opvwec (mgr.14199) 48 : cephadm [INF] Reconfiguring daemon ceph-exporter.vm07 on vm07 2026-03-06T23:39:14.876 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:14 vm02 bash[17013]: cephadm 2026-03-06T22:39:12.862348+0000 mgr.vm02.opvwec (mgr.14199) 48 : cephadm [INF] Reconfiguring daemon ceph-exporter.vm07 on vm07 2026-03-06T23:39:14.876 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:14 vm02 bash[17013]: audit 2026-03-06T22:39:13.237356+0000 mgr.vm02.opvwec (mgr.14199) 49 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-06T23:39:14.876 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:14 vm02 bash[17013]: audit 2026-03-06T22:39:13.237356+0000 mgr.vm02.opvwec (mgr.14199) 49 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-06T23:39:14.876 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:14 vm02 bash[17013]: audit 2026-03-06T22:39:13.238310+0000 mgr.vm02.opvwec (mgr.14199) 50 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm02.local:3000"}]: dispatch 2026-03-06T23:39:14.876 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:14 vm02 bash[17013]: audit 2026-03-06T22:39:13.238310+0000 mgr.vm02.opvwec (mgr.14199) 50 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm02.local:3000"}]: dispatch 2026-03-06T23:39:14.876 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:14 vm02 bash[17013]: audit 2026-03-06T22:39:13.251554+0000 mgr.vm02.opvwec (mgr.14199) 51 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-06T23:39:14.876 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:14 vm02 bash[17013]: audit 2026-03-06T22:39:13.251554+0000 mgr.vm02.opvwec (mgr.14199) 51 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-06T23:39:14.876 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:14 vm02 bash[17013]: audit 2026-03-06T22:39:13.252387+0000 mgr.vm02.opvwec (mgr.14199) 52 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm02.local:9093"}]: dispatch 2026-03-06T23:39:14.876 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:14 vm02 bash[17013]: audit 2026-03-06T22:39:13.252387+0000 mgr.vm02.opvwec (mgr.14199) 52 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm02.local:9093"}]: dispatch 2026-03-06T23:39:14.876 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:14 vm02 bash[17013]: audit 2026-03-06T22:39:13.263663+0000 mgr.vm02.opvwec (mgr.14199) 53 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-06T23:39:14.876 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:14 vm02 bash[17013]: audit 2026-03-06T22:39:13.263663+0000 mgr.vm02.opvwec (mgr.14199) 53 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-06T23:39:14.876 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:14 vm02 bash[17013]: audit 2026-03-06T22:39:13.264585+0000 mgr.vm02.opvwec (mgr.14199) 54 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm02.local:9095"}]: dispatch 2026-03-06T23:39:14.876 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:14 vm02 bash[17013]: audit 2026-03-06T22:39:13.264585+0000 mgr.vm02.opvwec (mgr.14199) 54 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm02.local:9095"}]: dispatch 2026-03-06T23:39:14.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:14 vm07 bash[20848]: cluster 2026-03-06T22:39:12.747428+0000 mgr.vm02.opvwec (mgr.14199) 46 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-06T23:39:14.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:14 vm07 bash[20848]: cluster 2026-03-06T22:39:12.747428+0000 mgr.vm02.opvwec (mgr.14199) 46 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-06T23:39:14.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:14 vm07 bash[20848]: cephadm 2026-03-06T22:39:12.860676+0000 mgr.vm02.opvwec (mgr.14199) 47 : cephadm [INF] Reconfiguring ceph-exporter.vm07 (monmap changed)... 2026-03-06T23:39:14.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:14 vm07 bash[20848]: cephadm 2026-03-06T22:39:12.860676+0000 mgr.vm02.opvwec (mgr.14199) 47 : cephadm [INF] Reconfiguring ceph-exporter.vm07 (monmap changed)... 2026-03-06T23:39:14.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:14 vm07 bash[20848]: cephadm 2026-03-06T22:39:12.862348+0000 mgr.vm02.opvwec (mgr.14199) 48 : cephadm [INF] Reconfiguring daemon ceph-exporter.vm07 on vm07 2026-03-06T23:39:14.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:14 vm07 bash[20848]: cephadm 2026-03-06T22:39:12.862348+0000 mgr.vm02.opvwec (mgr.14199) 48 : cephadm [INF] Reconfiguring daemon ceph-exporter.vm07 on vm07 2026-03-06T23:39:14.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:14 vm07 bash[20848]: audit 2026-03-06T22:39:13.237356+0000 mgr.vm02.opvwec (mgr.14199) 49 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-06T23:39:14.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:14 vm07 bash[20848]: audit 2026-03-06T22:39:13.237356+0000 mgr.vm02.opvwec (mgr.14199) 49 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-06T23:39:14.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:14 vm07 bash[20848]: audit 2026-03-06T22:39:13.238310+0000 mgr.vm02.opvwec (mgr.14199) 50 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm02.local:3000"}]: dispatch 2026-03-06T23:39:14.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:14 vm07 bash[20848]: audit 2026-03-06T22:39:13.238310+0000 mgr.vm02.opvwec (mgr.14199) 50 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm02.local:3000"}]: dispatch 2026-03-06T23:39:14.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:14 vm07 bash[20848]: audit 2026-03-06T22:39:13.251554+0000 mgr.vm02.opvwec (mgr.14199) 51 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-06T23:39:14.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:14 vm07 bash[20848]: audit 2026-03-06T22:39:13.251554+0000 mgr.vm02.opvwec (mgr.14199) 51 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-06T23:39:14.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:14 vm07 bash[20848]: audit 2026-03-06T22:39:13.252387+0000 mgr.vm02.opvwec (mgr.14199) 52 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm02.local:9093"}]: dispatch 2026-03-06T23:39:14.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:14 vm07 bash[20848]: audit 2026-03-06T22:39:13.252387+0000 mgr.vm02.opvwec (mgr.14199) 52 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm02.local:9093"}]: dispatch 2026-03-06T23:39:14.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:14 vm07 bash[20848]: audit 2026-03-06T22:39:13.263663+0000 mgr.vm02.opvwec (mgr.14199) 53 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-06T23:39:14.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:14 vm07 bash[20848]: audit 2026-03-06T22:39:13.263663+0000 mgr.vm02.opvwec (mgr.14199) 53 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-06T23:39:14.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:14 vm07 bash[20848]: audit 2026-03-06T22:39:13.264585+0000 mgr.vm02.opvwec (mgr.14199) 54 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm02.local:9095"}]: dispatch 2026-03-06T23:39:14.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:14 vm07 bash[20848]: audit 2026-03-06T22:39:13.264585+0000 mgr.vm02.opvwec (mgr.14199) 54 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm02.local:9095"}]: dispatch 2026-03-06T23:39:15.199 INFO:teuthology.orchestra.run.vm02.stdout:# minimal ceph.conf for f8b8c16a-19ac-11f1-87e7-9b7402b99c44 2026-03-06T23:39:15.199 INFO:teuthology.orchestra.run.vm02.stdout:[global] 2026-03-06T23:39:15.199 INFO:teuthology.orchestra.run.vm02.stdout: fsid = f8b8c16a-19ac-11f1-87e7-9b7402b99c44 2026-03-06T23:39:15.199 INFO:teuthology.orchestra.run.vm02.stdout: mon_host = [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] 2026-03-06T23:39:15.259 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring... 2026-03-06T23:39:15.259 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-06T23:39:15.259 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/ceph/ceph.conf 2026-03-06T23:39:15.266 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-06T23:39:15.266 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-06T23:39:15.317 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-06T23:39:15.317 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/ceph/ceph.conf 2026-03-06T23:39:15.324 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-06T23:39:15.324 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-06T23:39:15.371 INFO:tasks.cephadm:Deploying OSDs... 
2026-03-06T23:39:15.371 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-06T23:39:15.371 DEBUG:teuthology.orchestra.run.vm02:> dd if=/scratch_devs of=/dev/stdout
2026-03-06T23:39:15.374 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-06T23:39:15.374 DEBUG:teuthology.orchestra.run.vm02:> ls /dev/[sv]d?
2026-03-06T23:39:15.420 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vda
2026-03-06T23:39:15.420 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vdb
2026-03-06T23:39:15.420 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vdc
2026-03-06T23:39:15.420 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vdd
2026-03-06T23:39:15.420 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vde
2026-03-06T23:39:15.420 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-06T23:39:15.420 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-06T23:39:15.421 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vdb
2026-03-06T23:39:15.464 INFO:teuthology.orchestra.run.vm02.stdout: File: /dev/vdb
2026-03-06T23:39:15.464 INFO:teuthology.orchestra.run.vm02.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-06T23:39:15.464 INFO:teuthology.orchestra.run.vm02.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10
2026-03-06T23:39:15.464 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-06T23:39:15.464 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-06 23:33:34.926225564 +0100
2026-03-06T23:39:15.465 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-06 23:33:33.898225564 +0100
2026-03-06T23:39:15.465 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-06 23:33:33.898225564 +0100
2026-03-06T23:39:15.465 INFO:teuthology.orchestra.run.vm02.stdout: Birth: -
2026-03-06T23:39:15.465 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-06T23:39:15.512 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in
2026-03-06T23:39:15.513 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out
2026-03-06T23:39:15.513 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.000171421 s, 3.0 MB/s
2026-03-06T23:39:15.513 DEBUG:teuthology.orchestra.run.vm02:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-06T23:39:15.561 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vdc
2026-03-06T23:39:15.608 INFO:teuthology.orchestra.run.vm02.stdout: File: /dev/vdc
2026-03-06T23:39:15.608 INFO:teuthology.orchestra.run.vm02.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-06T23:39:15.608 INFO:teuthology.orchestra.run.vm02.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20
2026-03-06T23:39:15.608 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-06T23:39:15.608 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-06 23:33:34.942225564 +0100
2026-03-06T23:39:15.608 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-06 23:33:33.898225564 +0100
2026-03-06T23:39:15.608 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-06 23:33:33.898225564 +0100
2026-03-06T23:39:15.608 INFO:teuthology.orchestra.run.vm02.stdout: Birth: -
2026-03-06T23:39:15.608 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-06T23:39:15.656 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in
2026-03-06T23:39:15.656 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out
2026-03-06T23:39:15.656 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.000179526 s, 2.9 MB/s
2026-03-06T23:39:15.657 DEBUG:teuthology.orchestra.run.vm02:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-06T23:39:15.701 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vdd
2026-03-06T23:39:15.748 INFO:teuthology.orchestra.run.vm02.stdout: File: /dev/vdd
2026-03-06T23:39:15.748 INFO:teuthology.orchestra.run.vm02.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-06T23:39:15.748 INFO:teuthology.orchestra.run.vm02.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30
2026-03-06T23:39:15.748 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-06T23:39:15.748 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-06 23:33:34.914225564 +0100
2026-03-06T23:39:15.748 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-06 23:33:33.902225564 +0100
2026-03-06T23:39:15.748 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-06 23:33:33.902225564 +0100
2026-03-06T23:39:15.748 INFO:teuthology.orchestra.run.vm02.stdout: Birth: -
2026-03-06T23:39:15.749 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-06T23:39:15.796 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:15 vm02 bash[17013]: audit 2026-03-06T22:39:15.194222+0000 mon.vm02 (mon.0) 330 : audit [DBG] from='client.? 192.168.123.102:0/3860665669' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:39:15.798 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in
2026-03-06T23:39:15.798 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out
2026-03-06T23:39:15.798 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.000182662 s, 2.8 MB/s
2026-03-06T23:39:15.799 DEBUG:teuthology.orchestra.run.vm02:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-06T23:39:15.846 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vde
2026-03-06T23:39:15.892 INFO:teuthology.orchestra.run.vm02.stdout: File: /dev/vde
2026-03-06T23:39:15.892 INFO:teuthology.orchestra.run.vm02.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-06T23:39:15.892 INFO:teuthology.orchestra.run.vm02.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40
2026-03-06T23:39:15.892 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-06T23:39:15.892 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-06 23:33:34.930225564 +0100
2026-03-06T23:39:15.892 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-06 23:33:33.902225564 +0100
2026-03-06T23:39:15.893 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-06 23:33:33.902225564 +0100
2026-03-06T23:39:15.893 INFO:teuthology.orchestra.run.vm02.stdout: Birth: -
2026-03-06T23:39:15.893 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-06T23:39:15.941 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in
2026-03-06T23:39:15.941 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out
2026-03-06T23:39:15.941 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.000160531 s, 3.2 MB/s
2026-03-06T23:39:15.942 DEBUG:teuthology.orchestra.run.vm02:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-06T23:39:15.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:15 vm07 bash[20848]: audit 2026-03-06T22:39:15.194222+0000 mon.vm02 (mon.0) 330 : audit [DBG] from='client.? 192.168.123.102:0/3860665669' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:39:15.990 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-06T23:39:15.990 DEBUG:teuthology.orchestra.run.vm07:> dd if=/scratch_devs of=/dev/stdout
2026-03-06T23:39:15.993 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-06T23:39:15.993 DEBUG:teuthology.orchestra.run.vm07:> ls /dev/[sv]d?
2026-03-06T23:39:16.039 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vda
2026-03-06T23:39:16.039 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vdb
2026-03-06T23:39:16.039 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vdc
2026-03-06T23:39:16.039 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vdd
2026-03-06T23:39:16.039 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vde
2026-03-06T23:39:16.039 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-06T23:39:16.039 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-06T23:39:16.039 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vdb
2026-03-06T23:39:16.083 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vdb
2026-03-06T23:39:16.083 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-06T23:39:16.083 INFO:teuthology.orchestra.run.vm07.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10
2026-03-06T23:39:16.083 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-06T23:39:16.083 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-06 23:34:00.073683274 +0100
2026-03-06T23:39:16.083 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-06 23:33:59.033683274 +0100
2026-03-06T23:39:16.083 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-06 23:33:59.033683274 +0100
2026-03-06T23:39:16.083 INFO:teuthology.orchestra.run.vm07.stdout: Birth: -
2026-03-06T23:39:16.083 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-06T23:39:16.130 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in
2026-03-06T23:39:16.131 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out
2026-03-06T23:39:16.131 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 0.000221385 s, 2.3 MB/s
2026-03-06T23:39:16.131 DEBUG:teuthology.orchestra.run.vm07:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-06T23:39:16.175 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vdc
2026-03-06T23:39:16.218 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vdc
2026-03-06T23:39:16.219 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-06T23:39:16.219 INFO:teuthology.orchestra.run.vm07.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20
2026-03-06T23:39:16.219 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-06T23:39:16.219 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-06 23:34:00.081683274 +0100
2026-03-06T23:39:16.219 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-06 23:33:58.997683274 +0100
2026-03-06T23:39:16.219 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-06 23:33:58.997683274 +0100
2026-03-06T23:39:16.219 INFO:teuthology.orchestra.run.vm07.stdout: Birth: -
2026-03-06T23:39:16.219 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-06T23:39:16.266 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in
2026-03-06T23:39:16.266 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out
2026-03-06T23:39:16.266 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 0.000152325 s, 3.4 MB/s
2026-03-06T23:39:16.267 DEBUG:teuthology.orchestra.run.vm07:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-06T23:39:16.312 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vdd
2026-03-06T23:39:16.355 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vdd
2026-03-06T23:39:16.355 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-06T23:39:16.355 INFO:teuthology.orchestra.run.vm07.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30
2026-03-06T23:39:16.355 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-06T23:39:16.355 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-06 23:34:00.073683274 +0100
2026-03-06T23:39:16.355 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-06 23:33:59.037683274 +0100
2026-03-06T23:39:16.355 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-06 23:33:59.037683274 +0100
2026-03-06T23:39:16.355 INFO:teuthology.orchestra.run.vm07.stdout: Birth: -
2026-03-06T23:39:16.355 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-06T23:39:16.402 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in
2026-03-06T23:39:16.402 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out
2026-03-06T23:39:16.402 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 0.000203712 s, 2.5 MB/s
2026-03-06T23:39:16.403 DEBUG:teuthology.orchestra.run.vm07:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-06T23:39:16.448 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vde
2026-03-06T23:39:16.491 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vde
2026-03-06T23:39:16.491 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-06T23:39:16.491 INFO:teuthology.orchestra.run.vm07.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40
2026-03-06T23:39:16.491 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-06T23:39:16.491 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-06 23:34:00.081683274 +0100
2026-03-06T23:39:16.491 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-06 23:33:59.037683274 +0100
2026-03-06T23:39:16.491 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-06 23:33:59.037683274 +0100
2026-03-06T23:39:16.491 INFO:teuthology.orchestra.run.vm07.stdout: Birth: -
2026-03-06T23:39:16.491 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-06T23:39:16.538 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in
2026-03-06T23:39:16.538 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out
2026-03-06T23:39:16.539 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 0.000269535 s, 1.9 MB/s
2026-03-06T23:39:16.539 DEBUG:teuthology.orchestra.run.vm07:> ! mount | grep -v devtmpfs | grep -q /dev/vde
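Each scratch disk above is vetted with the same three probes before cephadm may consume it: stat the device node, read a single sector with dd, and confirm that mount does not list it anywhere outside devtmpfs (reading /scratch_devs failed with remote process result 1, so the harness fell back to listing /dev/[sv]d? and dropping the root disk /dev/vda). A rough Python equivalent of that per-device filter, run locally via subprocess (the helper name and structure are illustrative, not teuthology's actual implementation):

# Sketch of the per-device usability probe seen above: stat the node,
# read one 512-byte sector, and verify the device is not mounted.
import subprocess

def device_usable(dev: str) -> bool:
    subprocess.run(['stat', dev], check=True)                    # node exists
    subprocess.run(['sudo', 'dd', 'if=' + dev, 'of=/dev/null',
                    'count=1'], check=True)                      # readable
    mounts = subprocess.run(['mount'], capture_output=True,
                            text=True, check=True).stdout
    # equivalent of: ! mount | grep -v devtmpfs | grep -q <dev>
    return not any(dev in line
                   for line in mounts.splitlines()
                   if 'devtmpfs' not in line)

# devs = ['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
# usable = [d for d in devs if device_usable(d)]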
2026-03-06T23:39:16.588 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph orch apply osd --all-available-devices
2026-03-06T23:39:16.692 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:16 vm07 bash[20848]: cluster 2026-03-06T22:39:14.747622+0000 mgr.vm02.opvwec (mgr.14199) 55 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:16.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:16 vm02 bash[17013]: cluster 2026-03-06T22:39:14.747622+0000 mgr.vm02.opvwec (mgr.14199) 55 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:18.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:18 vm07 bash[20848]: cluster 2026-03-06T22:39:16.747818+0000 mgr.vm02.opvwec (mgr.14199) 56 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:18.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:18 vm07 bash[20848]: audit 2026-03-06T22:39:18.290026+0000 mon.vm02 (mon.0) 331 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:18.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:18 vm07 bash[20848]: audit 2026-03-06T22:39:18.299024+0000 mon.vm02 (mon.0) 332 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:18.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:18 vm07 bash[20848]: audit 2026-03-06T22:39:18.398411+0000 mon.vm02 (mon.0) 333 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:18.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:18 vm07 bash[20848]: audit 2026-03-06T22:39:18.404082+0000 mon.vm02 (mon.0) 334 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:18.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:18 vm07 bash[20848]: audit 2026-03-06T22:39:18.405242+0000 mon.vm02 (mon.0) 335 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:39:18.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:18 vm07 bash[20848]: audit 2026-03-06T22:39:18.405762+0000 mon.vm02 (mon.0) 336 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-06T23:39:18.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:18 vm07 bash[20848]: audit 2026-03-06T22:39:18.410123+0000 mon.vm02 (mon.0) 337 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:18.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:18 vm02 bash[17013]: cluster 2026-03-06T22:39:16.747818+0000 mgr.vm02.opvwec (mgr.14199) 56 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:18.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:18 vm02 bash[17013]: audit 2026-03-06T22:39:18.290026+0000 mon.vm02 (mon.0) 331 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:18.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:18 vm02 bash[17013]: audit 2026-03-06T22:39:18.299024+0000 mon.vm02 (mon.0) 332 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:18.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:18 vm02 bash[17013]: audit 2026-03-06T22:39:18.398411+0000 mon.vm02 (mon.0) 333 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:18.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:18 vm02 bash[17013]: audit 2026-03-06T22:39:18.404082+0000 mon.vm02 (mon.0) 334 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:18.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:18 vm02 bash[17013]: audit 2026-03-06T22:39:18.405242+0000 mon.vm02 (mon.0) 335 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:39:18.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:18 vm02 bash[17013]: audit 2026-03-06T22:39:18.405762+0000 mon.vm02 (mon.0) 336 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-06T23:39:18.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:18 vm02 bash[17013]: audit 2026-03-06T22:39:18.410123+0000 mon.vm02 (mon.0) 337 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:20.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:20 vm07 bash[20848]: cluster 2026-03-06T22:39:18.748063+0000 mgr.vm02.opvwec (mgr.14199) 57 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:20.992 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:20 vm02 bash[17013]: cluster 2026-03-06T22:39:18.748063+0000 mgr.vm02.opvwec (mgr.14199) 57 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:21.083 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm07/config
2026-03-06T23:39:21.445 INFO:teuthology.orchestra.run.vm07.stdout:Scheduled osd.all-available-devices update...
2026-03-06T23:39:21.517 INFO:tasks.cephadm:Waiting for 8 OSDs to come up...
2026-03-06T23:39:21.517 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph osd stat -f json
2026-03-06T23:39:21.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:21 vm07 bash[20848]: audit 2026-03-06T22:39:20.799032+0000 mon.vm02 (mon.0) 338 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:39:21.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:21 vm07 bash[20848]: audit 2026-03-06T22:39:21.439787+0000 mon.vm02 (mon.0) 339 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:21.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:21 vm07 bash[20848]: audit 2026-03-06T22:39:21.440854+0000 mon.vm02 (mon.0) 340 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-06T23:39:21.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:21 vm02 bash[17013]: audit 2026-03-06T22:39:20.799032+0000 mon.vm02 (mon.0) 338 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:39:21.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:21 vm02 bash[17013]: audit 2026-03-06T22:39:21.439787+0000 mon.vm02 (mon.0) 339 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:21.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:21 vm02 bash[17013]: audit 2026-03-06T22:39:21.440854+0000 mon.vm02 (mon.0) 340 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-06T23:39:22.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:22 vm07 bash[20848]: cluster 2026-03-06T22:39:20.748304+0000 mgr.vm02.opvwec (mgr.14199) 58 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:22.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:22 vm07 bash[20848]: audit 2026-03-06T22:39:21.433784+0000 mgr.vm02.opvwec (mgr.14199) 59 : audit [DBG] from='client.24105 -' entity='client.admin' cmd=[{"prefix": "orch apply osd", "all_available_devices": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:39:22.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:22 vm07 bash[20848]: cephadm 2026-03-06T22:39:21.434716+0000 mgr.vm02.opvwec (mgr.14199) 60 : cephadm [INF] Marking host: vm02 for OSDSpec preview refresh.
2026-03-06T23:39:22.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:22 vm07 bash[20848]: cephadm 2026-03-06T22:39:21.434736+0000 mgr.vm02.opvwec (mgr.14199) 61 : cephadm [INF] Marking host: vm07 for OSDSpec preview refresh.
2026-03-06T23:39:22.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:22 vm07 bash[20848]: cephadm 2026-03-06T22:39:21.434889+0000 mgr.vm02.opvwec (mgr.14199) 62 : cephadm [INF] Saving service osd.all-available-devices spec with placement *
2026-03-06T23:39:22.992 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:22 vm02 bash[17013]: cluster 2026-03-06T22:39:20.748304+0000 mgr.vm02.opvwec (mgr.14199) 58 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:22.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:22 vm02 bash[17013]: audit 2026-03-06T22:39:21.433784+0000 mgr.vm02.opvwec (mgr.14199) 59 : audit [DBG] from='client.24105 -' entity='client.admin' cmd=[{"prefix": "orch apply osd", "all_available_devices": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:39:22.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:22 vm02 bash[17013]: cephadm 2026-03-06T22:39:21.434716+0000 mgr.vm02.opvwec (mgr.14199) 60 : cephadm [INF] Marking host: vm02 for OSDSpec preview refresh.
2026-03-06T23:39:22.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:22 vm02 bash[17013]: cephadm 2026-03-06T22:39:21.434736+0000 mgr.vm02.opvwec (mgr.14199) 61 : cephadm [INF] Marking host: vm07 for OSDSpec preview refresh.
2026-03-06T23:39:22.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:22 vm02 bash[17013]: cephadm 2026-03-06T22:39:21.434889+0000 mgr.vm02.opvwec (mgr.14199) 62 : cephadm [INF] Saving service osd.all-available-devices spec with placement *
2026-03-06T23:39:23.980 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:23 vm07 bash[20848]: cluster 2026-03-06T22:39:22.748521+0000 mgr.vm02.opvwec (mgr.14199) 63 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:23.992 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:23 vm02 bash[17013]: cluster 2026-03-06T22:39:22.748521+0000 mgr.vm02.opvwec (mgr.14199) 63 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:26.228 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:25 vm07 bash[20848]: cluster 2026-03-06T22:39:24.748702+0000 mgr.vm02.opvwec (mgr.14199) 64 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:26.232 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:39:26.242 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:25 vm02 bash[17013]: cluster 2026-03-06T22:39:24.748702+0000 mgr.vm02.opvwec (mgr.14199) 64 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:26.649 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-06T23:39:26.727 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":5,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0}
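The "Waiting for 8 OSDs to come up..." step above polls "ceph osd stat -f json" through a cephadm shell; the sample just above still reports "num_osds":0, so the poll repeats below. A hedged sketch of such a wait loop (the retry interval, timeout, and the choice of num_up_osds as the success criterion are assumptions; the cephadm invocation is abbreviated relative to the full command in the log):

# Sketch: poll `ceph osd stat -f json` until the expected number of OSDs
# is up, mirroring the "Waiting for 8 OSDs to come up..." step above.
import json
import subprocess
import time

def wait_for_osds(expected: int, fsid: str, timeout: float = 900.0) -> int:
    deadline = time.monotonic() + timeout
    up = 0
    while time.monotonic() < deadline:
        out = subprocess.run(
            ['sudo', 'cephadm', 'shell', '--fsid', fsid, '--',
             'ceph', 'osd', 'stat', '-f', 'json'],
            capture_output=True, text=True, check=True).stdout
        up = json.loads(out)['num_up_osds']   # assumed success criterion
        if up >= expected:
            return up
        time.sleep(5)                         # assumed retry interval
    raise TimeoutError(f'only {up}/{expected} OSDs came up')

# e.g. wait_for_osds(8, 'f8b8c16a-19ac-11f1-87e7-9b7402b99c44')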
2026-03-06T23:39:27.728 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph osd stat -f json
2026-03-06T23:39:27.734 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:27 vm02 bash[17013]: audit 2026-03-06T22:39:26.475369+0000 mon.vm02 (mon.0) 341 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:27.734 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:27 vm02 bash[17013]: audit 2026-03-06T22:39:26.480691+0000 mon.vm02 (mon.0) 342 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:27.734 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:27 vm02 bash[17013]: audit 2026-03-06T22:39:26.484830+0000 mon.vm02 (mon.0) 343 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:27.734 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:27 vm02 bash[17013]: audit 2026-03-06T22:39:26.489461+0000 mon.vm02 (mon.0) 344 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:27.734 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:27 vm02 bash[17013]: audit 2026-03-06T22:39:26.644072+0000 mon.vm02 (mon.0) 345 : audit [DBG] from='client.? 192.168.123.102:0/607582902' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-06T23:39:27.734 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:27 vm02 bash[17013]: audit 2026-03-06T22:39:26.828926+0000 mon.vm02 (mon.0) 346 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:27.734 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:27 vm02 bash[17013]: audit 2026-03-06T22:39:26.833201+0000 mon.vm02 (mon.0) 347 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:27.734 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:27 vm02 bash[17013]: audit 2026-03-06T22:39:26.836870+0000 mon.vm02 (mon.0) 348 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:27.734 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:27 vm02 bash[17013]: audit 2026-03-06T22:39:26.840364+0000 mon.vm02 (mon.0) 349 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:27.734 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:27 vm02 bash[17013]: audit 2026-03-06T22:39:26.841136+0000 mon.vm02 (mon.0) 350 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:39:27.734 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:27 vm02 bash[17013]: audit 2026-03-06T22:39:26.841700+0000 mon.vm02 (mon.0) 351 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-06T23:39:27.735 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:27 vm02 bash[17013]: audit 2026-03-06T22:39:26.845273+0000 mon.vm02 (mon.0) 352 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:27.735 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:27 vm02 bash[17013]: audit 2026-03-06T22:39:26.847051+0000 mon.vm02 (mon.0) 353 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-06T23:39:27.735 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:27 vm02 bash[17013]: audit 2026-03-06T22:39:26.849242+0000 mon.vm02 (mon.0) 354 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-06T23:39:27.735 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:27 vm02 bash[17013]: audit 2026-03-06T22:39:26.849771+0000 mon.vm02 (mon.0) 355 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:39:27.735 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:27 vm02 bash[17013]: audit 2026-03-06T22:39:26.851536+0000 mon.vm02 (mon.0) 356 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-06T23:39:27.735 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:27 vm02 bash[17013]: audit 2026-03-06T22:39:26.852075+0000 mon.vm02 (mon.0) 357 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:39:27.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:27 vm07 bash[20848]: audit 2026-03-06T22:39:26.475369+0000 mon.vm02 (mon.0) 341 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:27.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:27 vm07 bash[20848]: audit 2026-03-06T22:39:26.480691+0000 mon.vm02 (mon.0) 342 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:27.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:27 vm07 bash[20848]: audit 2026-03-06T22:39:26.484830+0000 mon.vm02 (mon.0) 343 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:27.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:27 vm07 bash[20848]: audit 2026-03-06T22:39:26.489461+0000 mon.vm02 (mon.0) 344 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:27.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:27 vm07 bash[20848]: audit 2026-03-06T22:39:26.644072+0000 mon.vm02 (mon.0) 345 : audit [DBG] from='client.? 192.168.123.102:0/607582902' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-06T23:39:27.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:27 vm07 bash[20848]: audit 2026-03-06T22:39:26.644072+0000 mon.vm02 (mon.0) 345 : audit [DBG] from='client.?
192.168.123.102:0/607582902' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-06T23:39:27.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:27 vm07 bash[20848]: audit 2026-03-06T22:39:26.828926+0000 mon.vm02 (mon.0) 346 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:27.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:27 vm07 bash[20848]: audit 2026-03-06T22:39:26.828926+0000 mon.vm02 (mon.0) 346 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:27.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:27 vm07 bash[20848]: audit 2026-03-06T22:39:26.833201+0000 mon.vm02 (mon.0) 347 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:27.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:27 vm07 bash[20848]: audit 2026-03-06T22:39:26.833201+0000 mon.vm02 (mon.0) 347 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:27.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:27 vm07 bash[20848]: audit 2026-03-06T22:39:26.836870+0000 mon.vm02 (mon.0) 348 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:27.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:27 vm07 bash[20848]: audit 2026-03-06T22:39:26.836870+0000 mon.vm02 (mon.0) 348 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:27.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:27 vm07 bash[20848]: audit 2026-03-06T22:39:26.840364+0000 mon.vm02 (mon.0) 349 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:27.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:27 vm07 bash[20848]: audit 2026-03-06T22:39:26.840364+0000 mon.vm02 (mon.0) 349 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:27.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:27 vm07 bash[20848]: audit 2026-03-06T22:39:26.841136+0000 mon.vm02 (mon.0) 350 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:27.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:27 vm07 bash[20848]: audit 2026-03-06T22:39:26.841136+0000 mon.vm02 (mon.0) 350 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:27.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:27 vm07 bash[20848]: audit 2026-03-06T22:39:26.841700+0000 mon.vm02 (mon.0) 351 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-06T23:39:27.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:27 vm07 bash[20848]: audit 2026-03-06T22:39:26.841700+0000 mon.vm02 (mon.0) 351 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-06T23:39:27.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:27 vm07 bash[20848]: audit 2026-03-06T22:39:26.845273+0000 mon.vm02 (mon.0) 352 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:27.979 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:27 vm07 bash[20848]: audit 2026-03-06T22:39:26.845273+0000 mon.vm02 (mon.0) 352 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:27.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:27 vm07 bash[20848]: audit 2026-03-06T22:39:26.847051+0000 mon.vm02 (mon.0) 353 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-06T23:39:27.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:27 vm07 bash[20848]: audit 2026-03-06T22:39:26.847051+0000 mon.vm02 (mon.0) 353 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-06T23:39:27.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:27 vm07 bash[20848]: audit 2026-03-06T22:39:26.849242+0000 mon.vm02 (mon.0) 354 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-06T23:39:27.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:27 vm07 bash[20848]: audit 2026-03-06T22:39:26.849242+0000 mon.vm02 (mon.0) 354 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-06T23:39:27.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:27 vm07 bash[20848]: audit 2026-03-06T22:39:26.849771+0000 mon.vm02 (mon.0) 355 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:27.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:27 vm07 bash[20848]: audit 2026-03-06T22:39:26.849771+0000 mon.vm02 (mon.0) 355 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:27.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:27 vm07 bash[20848]: audit 2026-03-06T22:39:26.851536+0000 mon.vm02 (mon.0) 356 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-06T23:39:27.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:27 vm07 bash[20848]: audit 2026-03-06T22:39:26.851536+0000 mon.vm02 (mon.0) 356 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-06T23:39:27.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:27 vm07 bash[20848]: audit 2026-03-06T22:39:26.852075+0000 mon.vm02 (mon.0) 357 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:27.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:27 vm07 bash[20848]: audit 2026-03-06T22:39:26.852075+0000 mon.vm02 (mon.0) 357 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:28.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:28 vm07 bash[20848]: cluster 2026-03-06T22:39:26.748882+0000 mgr.vm02.opvwec (mgr.14199) 65 : cluster [DBG] pgmap v11: 0 
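Every audit record above has the same shape: the cluster-side timestamp, the reporting monitor and its rank ("mon.vm02 (mon.0)"), a per-monitor sequence number, the severity ([INF] or [DBG]), the sender ("from=") and authenticated entity, and, for command records, the command as a JSON fragment followed by its phase ("dispatch" or "finished"). A minimal sketch for tallying which commands get dispatched in an excerpt like this one (the field layout is inferred from the lines above, and the log path is hypothetical):

    import re
    from collections import Counter

    # Matches the tail of a mon audit record: severity, sender, entity,
    # and (for dispatch records) the command up to the ": dispatch" marker.
    AUDIT_RE = re.compile(
        r"audit \[(?P<level>INF|DBG)\] from='(?P<sender>[^']*)' "
        r"entity='(?P<entity>[^']*)' cmd=(?P<cmd>.*?): dispatch"
    )

    counts = Counter()
    with open("teuthology.log") as fh:  # hypothetical path to this log
        for line in fh:
            m = AUDIT_RE.search(line)
            if m:
                counts[(m.group("entity"), m.group("cmd"))] += 1

    for (entity, cmd), n in counts.most_common():
        print(f"{n:4d}  {entity}  {cmd}")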
2026-03-06T23:39:28.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:28 vm07 bash[20848]: cluster 2026-03-06T22:39:26.748882+0000 mgr.vm02.opvwec (mgr.14199) 65 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:28.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:28 vm02 bash[17013]: cluster 2026-03-06T22:39:26.748882+0000 mgr.vm02.opvwec (mgr.14199) 65 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:30.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:30 vm07 bash[20848]: cluster 2026-03-06T22:39:28.749084+0000 mgr.vm02.opvwec (mgr.14199) 66 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:30.992 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:30 vm02 bash[17013]: cluster 2026-03-06T22:39:28.749084+0000 mgr.vm02.opvwec (mgr.14199) 66 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:31.643 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:39:32.048 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-06T23:39:32.142 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":5,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0}
2026-03-06T23:39:32.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:32 vm07 bash[20848]: cluster 2026-03-06T22:39:30.749301+0000 mgr.vm02.opvwec (mgr.14199) 67 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:32.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:32 vm07 bash[20848]: audit 2026-03-06T22:39:32.043501+0000 mon.vm02 (mon.0) 358 : audit [DBG] from='client.? 192.168.123.102:0/1692118242' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-06T23:39:32.992 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:32 vm02 bash[17013]: cluster 2026-03-06T22:39:30.749301+0000 mgr.vm02.opvwec (mgr.14199) 67 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:32.992 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:32 vm02 bash[17013]: audit 2026-03-06T22:39:32.043501+0000 mon.vm02 (mon.0) 358 : audit [DBG] from='client.? 192.168.123.102:0/1692118242' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-06T23:39:33.143 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph osd stat -f json
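The "Inferring config" line, the empty stdout line, and the JSON blob above are one round of the harness shelling into the cephadm container and reading ceph osd stat -f json; with "num_osds":0 the check cannot pass yet, so the same command is reissued (the DEBUG line above) a few seconds later. A rough reconstruction of that wait loop, assuming a hypothetical wait_for_osds helper rather than teuthology's actual implementation:

    import json
    import subprocess
    import time

    CEPHADM = "/home/ubuntu/cephtest/cephadm"  # path from the DEBUG line above
    IMAGE = "harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5"
    FSID = "f8b8c16a-19ac-11f1-87e7-9b7402b99c44"

    def osd_stat():
        # Same invocation as the DEBUG line: run `ceph osd stat -f json`
        # inside a cephadm shell against this cluster fsid.
        out = subprocess.check_output([
            "sudo", CEPHADM, "--image", IMAGE, "shell",
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring",
            "--fsid", FSID,
            "--", "ceph", "osd", "stat", "-f", "json",
        ])
        return json.loads(out)

    def wait_for_osds(want, timeout=300, interval=5):
        # Re-poll until the osdmap reports at least `want` OSDs.
        deadline = time.time() + timeout
        while time.time() < deadline:
            stat = osd_stat()
            if stat["num_osds"] >= want:
                return stat
            time.sleep(interval)
        raise TimeoutError(f"fewer than {want} OSDs after {timeout}s")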
2026-03-06T23:39:33.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:33 vm07 bash[20848]: audit 2026-03-06T22:39:32.990272+0000 mon.vm02 (mon.0) 359 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "418e828c-709f-40ee-9849-890589b82337"}]: dispatch
2026-03-06T23:39:33.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:33 vm07 bash[20848]: audit 2026-03-06T22:39:32.993305+0000 mon.vm02 (mon.0) 360 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "418e828c-709f-40ee-9849-890589b82337"}]': finished
2026-03-06T23:39:33.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:33 vm07 bash[20848]: audit 2026-03-06T22:39:32.993477+0000 mon.vm07 (mon.1) 2 : audit [INF] from='client.? 192.168.123.107:0/2281395195' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "418e828c-709f-40ee-9849-890589b82337"}]: dispatch
2026-03-06T23:39:33.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:33 vm07 bash[20848]: cluster 2026-03-06T22:39:32.995297+0000 mon.vm02 (mon.0) 361 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in
2026-03-06T23:39:33.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:33 vm07 bash[20848]: audit 2026-03-06T22:39:32.997724+0000 mon.vm02 (mon.0) 362 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-06T23:39:33.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:33 vm07 bash[20848]: audit 2026-03-06T22:39:33.145513+0000 mon.vm02 (mon.0) 363 : audit [INF] from='client.? 192.168.123.102:0/4284725071' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ace25c81-45bd-4eb3-b02f-ff194f355af7"}]: dispatch
2026-03-06T23:39:33.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:33 vm07 bash[20848]: audit 2026-03-06T22:39:33.149543+0000 mon.vm02 (mon.0) 364 : audit [INF] from='client.? 192.168.123.102:0/4284725071' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ace25c81-45bd-4eb3-b02f-ff194f355af7"}]': finished
2026-03-06T23:39:33.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:33 vm07 bash[20848]: cluster 2026-03-06T22:39:33.152112+0000 mon.vm02 (mon.0) 365 : cluster [DBG] osdmap e7: 2 total, 0 up, 2 in
2026-03-06T23:39:33.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:33 vm07 bash[20848]: audit 2026-03-06T22:39:33.155564+0000 mon.vm02 (mon.0) 366 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-06T23:39:33.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:33 vm07 bash[20848]: audit 2026-03-06T22:39:33.155975+0000 mon.vm02 (mon.0) 367 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-06T23:39:33.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:33 vm02 bash[17013]: audit 2026-03-06T22:39:32.990272+0000 mon.vm02 (mon.0) 359 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "418e828c-709f-40ee-9849-890589b82337"}]: dispatch
2026-03-06T23:39:33.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:33 vm02 bash[17013]: audit 2026-03-06T22:39:32.993305+0000 mon.vm02 (mon.0) 360 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "418e828c-709f-40ee-9849-890589b82337"}]': finished
2026-03-06T23:39:33.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:33 vm02 bash[17013]: audit 2026-03-06T22:39:32.993477+0000 mon.vm07 (mon.1) 2 : audit [INF] from='client.? 192.168.123.107:0/2281395195' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "418e828c-709f-40ee-9849-890589b82337"}]: dispatch
2026-03-06T23:39:33.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:33 vm02 bash[17013]: cluster 2026-03-06T22:39:32.995297+0000 mon.vm02 (mon.0) 361 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in
2026-03-06T23:39:33.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:33 vm02 bash[17013]: audit 2026-03-06T22:39:32.997724+0000 mon.vm02 (mon.0) 362 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-06T23:39:33.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:33 vm02 bash[17013]: audit 2026-03-06T22:39:33.145513+0000 mon.vm02 (mon.0) 363 : audit [INF] from='client.? 192.168.123.102:0/4284725071' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ace25c81-45bd-4eb3-b02f-ff194f355af7"}]: dispatch
2026-03-06T23:39:33.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:33 vm02 bash[17013]: audit 2026-03-06T22:39:33.149543+0000 mon.vm02 (mon.0) 364 : audit [INF] from='client.? 192.168.123.102:0/4284725071' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ace25c81-45bd-4eb3-b02f-ff194f355af7"}]': finished
2026-03-06T23:39:33.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:33 vm02 bash[17013]: cluster 2026-03-06T22:39:33.152112+0000 mon.vm02 (mon.0) 365 : cluster [DBG] osdmap e7: 2 total, 0 up, 2 in
2026-03-06T23:39:33.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:33 vm02 bash[17013]: audit 2026-03-06T22:39:33.155564+0000 mon.vm02 (mon.0) 366 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-06T23:39:33.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:33 vm02 bash[17013]: audit 2026-03-06T22:39:33.155975+0000 mon.vm02 (mon.0) 367 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-06T23:39:34.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:34 vm07 bash[20848]: cluster 2026-03-06T22:39:32.749537+0000 mgr.vm02.opvwec (mgr.14199) 68 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:34.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:34 vm07 bash[20848]: audit 2026-03-06T22:39:33.581280+0000 mon.vm07 (mon.1) 3 : audit [DBG] from='client.? 192.168.123.107:0/239422145' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-06T23:39:34.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:34 vm07 bash[20848]: audit 2026-03-06T22:39:33.752848+0000 mon.vm02 (mon.0) 368 : audit [DBG] from='client.? 192.168.123.102:0/104590655' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-06T23:39:34.992 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:34 vm02 bash[17013]: cluster 2026-03-06T22:39:32.749537+0000 mgr.vm02.opvwec (mgr.14199) 68 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:34.992 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:34 vm02 bash[17013]: audit 2026-03-06T22:39:33.581280+0000 mon.vm07 (mon.1) 3 : audit [DBG] from='client.? 192.168.123.107:0/239422145' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-06T23:39:34.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:34 vm02 bash[17013]: audit 2026-03-06T22:39:33.752848+0000 mon.vm02 (mon.0) 368 : audit [DBG] from='client.? 192.168.123.102:0/104590655' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-06T23:39:36.799 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:36 vm07 bash[20848]: cluster 2026-03-06T22:39:34.749717+0000 mgr.vm02.opvwec (mgr.14199) 69 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:36.799 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:36 vm07 bash[20848]: audit 2026-03-06T22:39:35.799111+0000 mon.vm02 (mon.0) 369 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:39:36.981 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:36 vm02 bash[17013]: cluster 2026-03-06T22:39:34.749717+0000 mgr.vm02.opvwec (mgr.14199) 69 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:36.981 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:36 vm02 bash[17013]: audit 2026-03-06T22:39:35.799111+0000 mon.vm02 (mon.0) 369 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:39:37.931 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:39:37.943 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:37 vm02 bash[17013]: audit 2026-03-06T22:39:36.694586+0000 mon.vm02 (mon.0) 370 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5e438db1-97e7-4551-a2a8-5b5117692f52"}]: dispatch
2026-03-06T23:39:37.943 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:37 vm02 bash[17013]: audit 2026-03-06T22:39:36.697905+0000 mon.vm07 (mon.1) 4 : audit [INF] from='client.? 192.168.123.107:0/1093269916' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5e438db1-97e7-4551-a2a8-5b5117692f52"}]: dispatch
2026-03-06T23:39:37.943 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:37 vm02 bash[17013]: audit 2026-03-06T22:39:36.698480+0000 mon.vm02 (mon.0) 371 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5e438db1-97e7-4551-a2a8-5b5117692f52"}]': finished
2026-03-06T23:39:37.943 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:37 vm02 bash[17013]: cluster 2026-03-06T22:39:36.700657+0000 mon.vm02 (mon.0) 372 : cluster [DBG] osdmap e8: 3 total, 0 up, 3 in
2026-03-06T23:39:37.943 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:37 vm02 bash[17013]: audit 2026-03-06T22:39:36.700795+0000 mon.vm02 (mon.0) 373 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-06T23:39:37.943 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:37 vm02 bash[17013]: audit 2026-03-06T22:39:36.700853+0000 mon.vm02 (mon.0) 374 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-06T23:39:37.944 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:37 vm02 bash[17013]: audit 2026-03-06T22:39:36.700887+0000 mon.vm02 (mon.0) 375 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-06T23:39:37.944 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:37 vm02 bash[17013]: audit 2026-03-06T22:39:36.879875+0000 mon.vm02 (mon.0) 376 : audit [INF] from='client.? 192.168.123.102:0/1010723040' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0b2838af-d2fd-47a1-a00c-95a72f13f66a"}]: dispatch
2026-03-06T23:39:37.944 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:37 vm02 bash[17013]: audit 2026-03-06T22:39:36.882931+0000 mon.vm02 (mon.0) 377 : audit [INF] from='client.? 192.168.123.102:0/1010723040' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0b2838af-d2fd-47a1-a00c-95a72f13f66a"}]': finished
2026-03-06T23:39:37.944 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:37 vm02 bash[17013]: cluster 2026-03-06T22:39:36.885948+0000 mon.vm02 (mon.0) 378 : cluster [DBG] osdmap e9: 4 total, 0 up, 4 in
2026-03-06T23:39:37.944 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:37 vm02 bash[17013]: audit 2026-03-06T22:39:36.886132+0000 mon.vm02 (mon.0) 379 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-06T23:39:37.944 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:37 vm02 bash[17013]: audit 2026-03-06T22:39:36.886254+0000 mon.vm02 (mon.0) 380 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-06T23:39:37.944 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:37 vm02 bash[17013]: audit 2026-03-06T22:39:36.886394+0000 mon.vm02 (mon.0) 381 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-06T23:39:37.944 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:37 vm02 bash[17013]: audit 2026-03-06T22:39:36.886491+0000 mon.vm02 (mon.0) 382 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-06T23:39:37.944 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:37 vm02 bash[17013]: audit 2026-03-06T22:39:37.310293+0000 mon.vm07 (mon.1) 5 : audit [DBG] from='client.? 192.168.123.107:0/1416420401' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-06T23:39:37.944 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:37 vm02 bash[17013]: audit 2026-03-06T22:39:37.493844+0000 mon.vm02 (mon.0) 383 : audit [DBG] from='client.? 192.168.123.102:0/3163135566' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-06T23:39:37.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:37 vm07 bash[20848]: audit 2026-03-06T22:39:36.694586+0000 mon.vm02 (mon.0) 370 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5e438db1-97e7-4551-a2a8-5b5117692f52"}]: dispatch
2026-03-06T23:39:37.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:37 vm07 bash[20848]: audit 2026-03-06T22:39:36.697905+0000 mon.vm07 (mon.1) 4 : audit [INF] from='client.? 192.168.123.107:0/1093269916' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5e438db1-97e7-4551-a2a8-5b5117692f52"}]: dispatch
2026-03-06T23:39:37.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:37 vm07 bash[20848]: audit 2026-03-06T22:39:36.698480+0000 mon.vm02 (mon.0) 371 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5e438db1-97e7-4551-a2a8-5b5117692f52"}]': finished
2026-03-06T23:39:37.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:37 vm07 bash[20848]: cluster 2026-03-06T22:39:36.700657+0000 mon.vm02 (mon.0) 372 : cluster [DBG] osdmap e8: 3 total, 0 up, 3 in
2026-03-06T23:39:37.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:37 vm07 bash[20848]: audit 2026-03-06T22:39:36.700795+0000 mon.vm02 (mon.0) 373 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-06T23:39:37.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:37 vm07 bash[20848]: audit 2026-03-06T22:39:36.700853+0000 mon.vm02 (mon.0) 374 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-06T23:39:37.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:37 vm07 bash[20848]: audit 2026-03-06T22:39:36.700887+0000 mon.vm02 (mon.0) 375 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-06T23:39:37.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:37 vm07 bash[20848]: audit 2026-03-06T22:39:36.879875+0000 mon.vm02 (mon.0) 376 : audit [INF] from='client.? 192.168.123.102:0/1010723040' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0b2838af-d2fd-47a1-a00c-95a72f13f66a"}]: dispatch
2026-03-06T23:39:37.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:37 vm07 bash[20848]: audit 2026-03-06T22:39:36.882931+0000 mon.vm02 (mon.0) 377 : audit [INF] from='client.? 192.168.123.102:0/1010723040' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0b2838af-d2fd-47a1-a00c-95a72f13f66a"}]': finished
2026-03-06T23:39:37.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:37 vm07 bash[20848]: cluster 2026-03-06T22:39:36.885948+0000 mon.vm02 (mon.0) 378 : cluster [DBG] osdmap e9: 4 total, 0 up, 4 in
2026-03-06T23:39:37.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:37 vm07 bash[20848]: audit 2026-03-06T22:39:36.886132+0000 mon.vm02 (mon.0) 379 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-06T23:39:37.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:37 vm07 bash[20848]: audit 2026-03-06T22:39:36.886254+0000 mon.vm02 (mon.0) 380 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-06T23:39:37.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:37 vm07 bash[20848]: audit 2026-03-06T22:39:36.886394+0000 mon.vm02 (mon.0) 381 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-06T23:39:37.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:37 vm07 bash[20848]: audit 2026-03-06T22:39:36.886491+0000 mon.vm02 (mon.0) 382 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-06T23:39:37.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:37 vm07 bash[20848]: audit 2026-03-06T22:39:37.310293+0000 mon.vm07 (mon.1) 5 : audit [DBG] from='client.? 192.168.123.107:0/1416420401' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-06T23:39:37.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:37 vm07 bash[20848]: audit 2026-03-06T22:39:37.493844+0000 mon.vm02 (mon.0) 383 : audit [DBG] from='client.? 192.168.123.102:0/3163135566' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-06T23:39:38.284 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-06T23:39:38.342 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":9,"num_osds":4,"num_up_osds":0,"osd_up_since":0,"num_in_osds":4,"osd_in_since":1772836776,"num_remapped_pgs":0}
2026-03-06T23:39:38.562 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:38 vm02 bash[17013]: cluster 2026-03-06T22:39:36.749866+0000 mgr.vm02.opvwec (mgr.14199) 70 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:38.562 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:38 vm02 bash[17013]: audit 2026-03-06T22:39:38.279030+0000 mon.vm02 (mon.0) 384 : audit [DBG] from='client.? 192.168.123.102:0/1690662955' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-06T23:39:38.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:38 vm07 bash[20848]: cluster 2026-03-06T22:39:36.749866+0000 mgr.vm02.opvwec (mgr.14199) 70 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:38.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:38 vm07 bash[20848]: audit 2026-03-06T22:39:38.279030+0000 mon.vm02 (mon.0) 384 : audit [DBG] from='client.? 192.168.123.102:0/1690662955' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
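By this second poll the report has advanced to "epoch":9 with "num_osds":4 but "num_up_osds":0: the four osd new registrations above grew the osdmap from e6 to e9, yet no OSD daemon has booted, so the up count stays at zero and the harness issues the command again (next DEBUG line). Reading those fields with the hypothetical osd_stat helper sketched earlier:

    stat = osd_stat()
    pending = stat["num_osds"] - stat["num_up_osds"]
    print(f"osdmap epoch {stat['epoch']}: {stat['num_osds']} OSDs registered, "
          f"{stat['num_up_osds']} up, {pending} created but not yet booted")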
192.168.123.102:0/1690662955' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-06T23:39:39.343 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph osd stat -f json 2026-03-06T23:39:40.814 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:40 vm07 bash[20848]: cluster 2026-03-06T22:39:38.750004+0000 mgr.vm02.opvwec (mgr.14199) 71 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-06T23:39:40.814 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:40 vm07 bash[20848]: cluster 2026-03-06T22:39:38.750004+0000 mgr.vm02.opvwec (mgr.14199) 71 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-06T23:39:40.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:40 vm02 bash[17013]: cluster 2026-03-06T22:39:38.750004+0000 mgr.vm02.opvwec (mgr.14199) 71 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-06T23:39:40.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:40 vm02 bash[17013]: cluster 2026-03-06T22:39:38.750004+0000 mgr.vm02.opvwec (mgr.14199) 71 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-06T23:39:41.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:41 vm07 bash[20848]: audit 2026-03-06T22:39:40.628100+0000 mon.vm02 (mon.0) 385 : audit [INF] from='client.? 192.168.123.102:0/463544215' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8af0222d-7b05-4f10-a678-5f0008c2f8f8"}]: dispatch 2026-03-06T23:39:41.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:41 vm07 bash[20848]: audit 2026-03-06T22:39:40.628100+0000 mon.vm02 (mon.0) 385 : audit [INF] from='client.? 192.168.123.102:0/463544215' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8af0222d-7b05-4f10-a678-5f0008c2f8f8"}]: dispatch 2026-03-06T23:39:41.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:41 vm07 bash[20848]: audit 2026-03-06T22:39:40.630774+0000 mon.vm02 (mon.0) 386 : audit [INF] from='client.? 192.168.123.102:0/463544215' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8af0222d-7b05-4f10-a678-5f0008c2f8f8"}]': finished 2026-03-06T23:39:41.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:41 vm07 bash[20848]: audit 2026-03-06T22:39:40.630774+0000 mon.vm02 (mon.0) 386 : audit [INF] from='client.? 
192.168.123.102:0/463544215' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8af0222d-7b05-4f10-a678-5f0008c2f8f8"}]': finished 2026-03-06T23:39:41.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:41 vm07 bash[20848]: cluster 2026-03-06T22:39:40.632599+0000 mon.vm02 (mon.0) 387 : cluster [DBG] osdmap e10: 5 total, 0 up, 5 in 2026-03-06T23:39:41.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:41 vm07 bash[20848]: cluster 2026-03-06T22:39:40.632599+0000 mon.vm02 (mon.0) 387 : cluster [DBG] osdmap e10: 5 total, 0 up, 5 in 2026-03-06T23:39:41.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:41 vm07 bash[20848]: audit 2026-03-06T22:39:40.632745+0000 mon.vm02 (mon.0) 388 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-06T23:39:41.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:41 vm07 bash[20848]: audit 2026-03-06T22:39:40.632745+0000 mon.vm02 (mon.0) 388 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-06T23:39:41.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:41 vm07 bash[20848]: audit 2026-03-06T22:39:40.632810+0000 mon.vm02 (mon.0) 389 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-06T23:39:41.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:41 vm07 bash[20848]: audit 2026-03-06T22:39:40.632810+0000 mon.vm02 (mon.0) 389 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-06T23:39:41.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:41 vm07 bash[20848]: audit 2026-03-06T22:39:40.632859+0000 mon.vm02 (mon.0) 390 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-06T23:39:41.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:41 vm07 bash[20848]: audit 2026-03-06T22:39:40.632859+0000 mon.vm02 (mon.0) 390 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-06T23:39:41.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:41 vm07 bash[20848]: audit 2026-03-06T22:39:40.632936+0000 mon.vm02 (mon.0) 391 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-06T23:39:41.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:41 vm07 bash[20848]: audit 2026-03-06T22:39:40.632936+0000 mon.vm02 (mon.0) 391 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-06T23:39:41.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:41 vm07 bash[20848]: audit 2026-03-06T22:39:40.632981+0000 mon.vm02 (mon.0) 392 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-06T23:39:41.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:41 vm07 bash[20848]: audit 2026-03-06T22:39:40.632981+0000 mon.vm02 (mon.0) 392 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-06T23:39:41.979 
2026-03-06T23:39:41.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:41 vm07 bash[20848]: audit 2026-03-06T22:39:40.680931+0000 mon.vm02 (mon.0) 393 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e23d8375-e171-457c-a818-baefaf27ce5c"}]: dispatch
2026-03-06T23:39:41.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:41 vm07 bash[20848]: audit 2026-03-06T22:39:40.683216+0000 mon.vm02 (mon.0) 394 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e23d8375-e171-457c-a818-baefaf27ce5c"}]': finished
2026-03-06T23:39:41.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:41 vm07 bash[20848]: audit 2026-03-06T22:39:40.684192+0000 mon.vm07 (mon.1) 6 : audit [INF] from='client.? 192.168.123.107:0/3281179034' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e23d8375-e171-457c-a818-baefaf27ce5c"}]: dispatch
2026-03-06T23:39:41.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:41 vm07 bash[20848]: cluster 2026-03-06T22:39:40.685073+0000 mon.vm02 (mon.0) 395 : cluster [DBG] osdmap e11: 6 total, 0 up, 6 in
2026-03-06T23:39:41.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:41 vm07 bash[20848]: audit 2026-03-06T22:39:40.685218+0000 mon.vm02 (mon.0) 396 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-06T23:39:41.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:41 vm07 bash[20848]: audit 2026-03-06T22:39:40.685281+0000 mon.vm02 (mon.0) 397 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-06T23:39:41.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:41 vm07 bash[20848]: audit 2026-03-06T22:39:40.685326+0000 mon.vm02 (mon.0) 398 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-06T23:39:41.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:41 vm07 bash[20848]: audit 2026-03-06T22:39:40.685385+0000 mon.vm02 (mon.0) 399 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-06T23:39:41.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:41 vm07 bash[20848]: audit 2026-03-06T22:39:40.685434+0000 mon.vm02 (mon.0) 400 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-06T23:39:41.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:41 vm07 bash[20848]: audit 2026-03-06T22:39:40.685514+0000 mon.vm02 (mon.0) 401 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-06T23:39:41.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:41 vm07 bash[20848]: audit 2026-03-06T22:39:41.241570+0000 mon.vm02 (mon.0) 402 : audit [DBG] from='client.? 192.168.123.102:0/1341558883' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-06T23:39:41.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:41 vm07 bash[20848]: audit 2026-03-06T22:39:41.304595+0000 mon.vm07 (mon.1) 7 : audit [DBG] from='client.? 192.168.123.107:0/2730877423' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-06T23:39:41.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:41 vm02 bash[17013]: audit 2026-03-06T22:39:40.628100+0000 mon.vm02 (mon.0) 385 : audit [INF] from='client.? 192.168.123.102:0/463544215' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8af0222d-7b05-4f10-a678-5f0008c2f8f8"}]: dispatch
2026-03-06T23:39:41.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:41 vm02 bash[17013]: audit 2026-03-06T22:39:40.630774+0000 mon.vm02 (mon.0) 386 : audit [INF] from='client.? 192.168.123.102:0/463544215' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8af0222d-7b05-4f10-a678-5f0008c2f8f8"}]': finished
2026-03-06T23:39:41.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:41 vm02 bash[17013]: cluster 2026-03-06T22:39:40.632599+0000 mon.vm02 (mon.0) 387 : cluster [DBG] osdmap e10: 5 total, 0 up, 5 in
2026-03-06T23:39:41.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:41 vm02 bash[17013]: audit 2026-03-06T22:39:40.632745+0000 mon.vm02 (mon.0) 388 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-06T23:39:41.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:41 vm02 bash[17013]: audit 2026-03-06T22:39:40.632810+0000 mon.vm02 (mon.0) 389 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-06T23:39:41.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:41 vm02 bash[17013]: audit 2026-03-06T22:39:40.632859+0000 mon.vm02 (mon.0) 390 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-06T23:39:41.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:41 vm02 bash[17013]: audit 2026-03-06T22:39:40.632936+0000 mon.vm02 (mon.0) 391 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-06T23:39:41.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:41 vm02 bash[17013]: audit 2026-03-06T22:39:40.632981+0000 mon.vm02 (mon.0) 392 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-06T23:39:41.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:41 vm02 bash[17013]: audit 2026-03-06T22:39:40.680931+0000 mon.vm02 (mon.0) 393 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e23d8375-e171-457c-a818-baefaf27ce5c"}]: dispatch
2026-03-06T23:39:41.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:41 vm02 bash[17013]: audit 2026-03-06T22:39:40.683216+0000 mon.vm02 (mon.0) 394 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e23d8375-e171-457c-a818-baefaf27ce5c"}]': finished
2026-03-06T23:39:41.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:41 vm02 bash[17013]: audit 2026-03-06T22:39:40.684192+0000 mon.vm07 (mon.1) 6 : audit [INF] from='client.? 192.168.123.107:0/3281179034' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e23d8375-e171-457c-a818-baefaf27ce5c"}]: dispatch
2026-03-06T23:39:41.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:41 vm02 bash[17013]: cluster 2026-03-06T22:39:40.685073+0000 mon.vm02 (mon.0) 395 : cluster [DBG] osdmap e11: 6 total, 0 up, 6 in
2026-03-06T23:39:41.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:41 vm02 bash[17013]: audit 2026-03-06T22:39:40.685218+0000 mon.vm02 (mon.0) 396 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-06T23:39:41.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:41 vm02 bash[17013]: audit 2026-03-06T22:39:40.685281+0000 mon.vm02 (mon.0) 397 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-06T23:39:41.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:41 vm02 bash[17013]: audit 2026-03-06T22:39:40.685326+0000 mon.vm02 (mon.0) 398 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-06T23:39:41.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:41 vm02 bash[17013]: audit 2026-03-06T22:39:40.685385+0000 mon.vm02 (mon.0) 399 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-06T23:39:41.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:41 vm02 bash[17013]: audit 2026-03-06T22:39:40.685434+0000 mon.vm02 (mon.0) 400 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-06T23:39:41.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:41 vm02 bash[17013]: audit 2026-03-06T22:39:40.685514+0000 mon.vm02 (mon.0) 401 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-06T23:39:41.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:41 vm02 bash[17013]: audit 2026-03-06T22:39:41.241570+0000 mon.vm02 (mon.0) 402 : audit [DBG] from='client.? 192.168.123.102:0/1341558883' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-06T23:39:41.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:41 vm02 bash[17013]: audit 2026-03-06T22:39:41.304595+0000 mon.vm07 (mon.1) 7 : audit [DBG] from='client.? 192.168.123.107:0/2730877423' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-06T23:39:42.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:42 vm07 bash[20848]: cluster 2026-03-06T22:39:40.750162+0000 mgr.vm02.opvwec (mgr.14199) 72 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:42.992 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:42 vm02 bash[17013]: cluster 2026-03-06T22:39:40.750162+0000 mgr.vm02.opvwec (mgr.14199) 72 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:44.110 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:39:44.469 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-06T23:39:44.555 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":11,"num_osds":6,"num_up_osds":0,"osd_up_since":0,"num_in_osds":6,"osd_in_since":1772836780,"num_remapped_pgs":0}
2026-03-06T23:39:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:44 vm02 bash[17013]: cluster 2026-03-06T22:39:42.750375+0000 mgr.vm02.opvwec (mgr.14199) 73 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:44 vm02 bash[17013]: audit 2026-03-06T22:39:44.464060+0000 mon.vm02 (mon.0) 403 : audit [DBG] from='client.? 192.168.123.102:0/2154979934' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-06T23:39:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:44 vm02 bash[17013]: audit 2026-03-06T22:39:44.673686+0000 mon.vm02 (mon.0) 404 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "323a807a-94bd-4543-a9ad-add56a77e9da"}]: dispatch
2026-03-06T23:39:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:44 vm02 bash[17013]: audit 2026-03-06T22:39:44.676798+0000 mon.vm07 (mon.1) 8 : audit [INF] from='client.? 192.168.123.107:0/600598863' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "323a807a-94bd-4543-a9ad-add56a77e9da"}]: dispatch
2026-03-06T23:39:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:44 vm02 bash[17013]: audit 2026-03-06T22:39:44.676896+0000 mon.vm02 (mon.0) 405 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "323a807a-94bd-4543-a9ad-add56a77e9da"}]': finished
2026-03-06T23:39:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:44 vm02 bash[17013]: cluster 2026-03-06T22:39:44.679363+0000 mon.vm02 (mon.0) 406 : cluster [DBG] osdmap e12: 7 total, 0 up, 7 in
2026-03-06T23:39:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:44 vm02 bash[17013]: audit 2026-03-06T22:39:44.679965+0000 mon.vm02 (mon.0) 407 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-06T23:39:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:44 vm02 bash[17013]: audit 2026-03-06T22:39:44.680350+0000 mon.vm02 (mon.0) 408 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-06T23:39:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:44 vm02 bash[17013]: audit 2026-03-06T22:39:44.680596+0000 mon.vm02 (mon.0) 409 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-06T23:39:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:44 vm02 bash[17013]: audit 2026-03-06T22:39:44.680824+0000 mon.vm02 (mon.0) 410 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-06T23:39:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:44 vm02 bash[17013]: audit 2026-03-06T22:39:44.681087+0000 mon.vm02 (mon.0) 411 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-06T23:39:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:44 vm02 bash[17013]: audit 2026-03-06T22:39:44.681314+0000 mon.vm02 (mon.0) 412 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-06T23:39:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:44 vm02 bash[17013]: audit 2026-03-06T22:39:44.681558+0000 mon.vm02 (mon.0) 413 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-06T23:39:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:44 vm02 bash[17013]: audit 2026-03-06T22:39:44.682388+0000 mon.vm02 (mon.0) 414 : audit [INF] from='client.? 192.168.123.102:0/1075387588' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6a439063-03a8-4958-811b-6a2933fe0919"}]: dispatch
2026-03-06T23:39:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:44 vm02 bash[17013]: audit 2026-03-06T22:39:44.684589+0000 mon.vm02 (mon.0) 415 : audit [INF] from='client.? 192.168.123.102:0/1075387588' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6a439063-03a8-4958-811b-6a2933fe0919"}]': finished
2026-03-06T23:39:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:44 vm02 bash[17013]: cluster 2026-03-06T22:39:44.687388+0000 mon.vm02 (mon.0) 416 : cluster [DBG] osdmap e13: 8 total, 0 up, 8 in
2026-03-06T23:39:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:44 vm02 bash[17013]: audit 2026-03-06T22:39:44.687640+0000 mon.vm02 (mon.0) 417 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-06T23:39:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:44 vm02 bash[17013]: audit 2026-03-06T22:39:44.687974+0000 mon.vm02 (mon.0) 418 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-06T23:39:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:44 vm02 bash[17013]: audit 2026-03-06T22:39:44.688215+0000 mon.vm02 (mon.0) 419 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-06T23:39:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:44 vm02 bash[17013]: audit 2026-03-06T22:39:44.688715+0000 mon.vm02 (mon.0) 420 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-06T23:39:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:44 vm02 bash[17013]: audit 2026-03-06T22:39:44.688922+0000 mon.vm02 (mon.0) 421 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-06T23:39:44.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:44 vm02 bash[17013]: audit 2026-03-06T22:39:44.689241+0000 mon.vm02 (mon.0) 422 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-06T23:39:44.744 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:44 vm02 bash[17013]: audit 2026-03-06T22:39:44.689538+0000 mon.vm02 (mon.0) 423 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-06T23:39:44.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:44 vm07 bash[20848]: cluster 2026-03-06T22:39:42.750375+0000 mgr.vm02.opvwec (mgr.14199) 73 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:44.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:44 vm07 bash[20848]: audit 2026-03-06T22:39:44.464060+0000 mon.vm02 (mon.0) 403 : audit [DBG] from='client.? 192.168.123.102:0/2154979934' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-06T23:39:44.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:44 vm07 bash[20848]: audit 2026-03-06T22:39:44.673686+0000 mon.vm02 (mon.0) 404 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "323a807a-94bd-4543-a9ad-add56a77e9da"}]: dispatch
2026-03-06T23:39:44.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:44 vm07 bash[20848]: audit 2026-03-06T22:39:44.676798+0000 mon.vm07 (mon.1) 8 : audit [INF] from='client.? 192.168.123.107:0/600598863' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "323a807a-94bd-4543-a9ad-add56a77e9da"}]: dispatch
2026-03-06T23:39:44.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:44 vm07 bash[20848]: audit 2026-03-06T22:39:44.676896+0000 mon.vm02 (mon.0) 405 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "323a807a-94bd-4543-a9ad-add56a77e9da"}]': finished
2026-03-06T23:39:44.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:44 vm07 bash[20848]: cluster 2026-03-06T22:39:44.679363+0000 mon.vm02 (mon.0) 406 : cluster [DBG] osdmap e12: 7 total, 0 up, 7 in
2026-03-06T23:39:44.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:44 vm07 bash[20848]: audit 2026-03-06T22:39:44.679965+0000 mon.vm02 (mon.0) 407 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-06T23:39:44.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:44 vm07 bash[20848]: audit 2026-03-06T22:39:44.680350+0000 mon.vm02 (mon.0) 408 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-06T23:39:44.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:44 vm07 bash[20848]: audit 2026-03-06T22:39:44.680596+0000 mon.vm02 (mon.0) 409 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-06T23:39:44.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:44 vm07 bash[20848]: audit 2026-03-06T22:39:44.680824+0000 mon.vm02 (mon.0) 410 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-06T23:39:44.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:44 vm07 bash[20848]: audit 2026-03-06T22:39:44.681087+0000 mon.vm02 (mon.0) 411 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-06T23:39:44.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:44 vm07 bash[20848]: audit 2026-03-06T22:39:44.681314+0000 mon.vm02 (mon.0) 412 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-06T23:39:44.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:44 vm07 bash[20848]: audit 2026-03-06T22:39:44.681558+0000 mon.vm02 (mon.0) 413 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-06T23:39:44.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:44 vm07 bash[20848]: audit 2026-03-06T22:39:44.682388+0000 mon.vm02 (mon.0) 414 : audit [INF] from='client.? 192.168.123.102:0/1075387588' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6a439063-03a8-4958-811b-6a2933fe0919"}]: dispatch
2026-03-06T23:39:44.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:44 vm07 bash[20848]: audit 2026-03-06T22:39:44.684589+0000 mon.vm02 (mon.0) 415 : audit [INF] from='client.? 192.168.123.102:0/1075387588' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6a439063-03a8-4958-811b-6a2933fe0919"}]': finished
2026-03-06T23:39:44.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:44 vm07 bash[20848]: cluster 2026-03-06T22:39:44.687388+0000 mon.vm02 (mon.0) 416 : cluster [DBG] osdmap e13: 8 total, 0 up, 8 in
2026-03-06T23:39:44.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:44 vm07 bash[20848]: audit 2026-03-06T22:39:44.687640+0000 mon.vm02 (mon.0) 417 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-06T23:39:44.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:44 vm07 bash[20848]: audit 2026-03-06T22:39:44.687974+0000 mon.vm02 (mon.0) 418 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-06T23:39:44.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:44 vm07 bash[20848]: audit 2026-03-06T22:39:44.688215+0000 mon.vm02 (mon.0) 419 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-06T23:39:44.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:44 vm07 bash[20848]: audit 2026-03-06T22:39:44.688715+0000 mon.vm02 (mon.0) 420 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-06T23:39:44.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:44 vm07 bash[20848]: audit 2026-03-06T22:39:44.688922+0000 mon.vm02 (mon.0) 421 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-06T23:39:44.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:44 vm07 bash[20848]: audit 2026-03-06T22:39:44.689241+0000 mon.vm02 (mon.0) 422 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-06T23:39:44.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:44 vm07 bash[20848]: audit 2026-03-06T22:39:44.689538+0000 mon.vm02 (mon.0) 423 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-06T23:39:45.557 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph osd stat -f json
2026-03-06T23:39:45.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:45 vm07 bash[20848]: audit 2026-03-06T22:39:44.689916+0000 mon.vm02 (mon.0) 424 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-06T23:39:45.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:45 vm07 bash[20848]: audit 2026-03-06T22:39:45.264677+0000 mon.vm07 (mon.1) 9 : audit [DBG] from='client.? 192.168.123.107:0/3021621297' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-06T23:39:45.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:45 vm07 bash[20848]: audit 2026-03-06T22:39:45.294618+0000 mon.vm02 (mon.0) 425 : audit [DBG] from='client.? 192.168.123.102:0/110146912' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-06T23:39:45.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:45 vm02 bash[17013]: audit 2026-03-06T22:39:44.689916+0000 mon.vm02 (mon.0) 424 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-06T23:39:45.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:45 vm02 bash[17013]: audit 2026-03-06T22:39:45.264677+0000 mon.vm07 (mon.1) 9 : audit [DBG] from='client.? 192.168.123.107:0/3021621297' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-06T23:39:45.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:45 vm02 bash[17013]: audit 2026-03-06T22:39:45.294618+0000 mon.vm02 (mon.0) 425 : audit [DBG] from='client.? 192.168.123.102:0/110146912' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-06T23:39:46.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:46 vm07 bash[20848]: cluster 2026-03-06T22:39:44.750553+0000 mgr.vm02.opvwec (mgr.14199) 74 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:46.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:46 vm02 bash[17013]: cluster 2026-03-06T22:39:44.750553+0000 mgr.vm02.opvwec (mgr.14199) 74 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:47.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:47 vm07 bash[20848]: cluster 2026-03-06T22:39:46.750710+0000 mgr.vm02.opvwec (mgr.14199) 75 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:47.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:47 vm02 bash[17013]: cluster 2026-03-06T22:39:46.750710+0000 mgr.vm02.opvwec (mgr.14199) 75 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:50.228 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:49 vm07 bash[20848]: cluster 2026-03-06T22:39:48.750874+0000 mgr.vm02.opvwec (mgr.14199) 76 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:50.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:49 vm02 bash[17013]: cluster 2026-03-06T22:39:48.750874+0000 mgr.vm02.opvwec (mgr.14199) 76 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:50.330 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:39:50.670 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-06T23:39:50.731 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":13,"num_osds":8,"num_up_osds":0,"osd_up_since":0,"num_in_osds":8,"osd_in_since":1772836784,"num_remapped_pgs":0}
2026-03-06T23:39:50.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:50 vm02 bash[17013]: audit 2026-03-06T22:39:50.665793+0000 mon.vm02 (mon.0) 426 : audit [DBG] from='client.? 192.168.123.102:0/2223115473' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-06T23:39:50.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:50 vm02 bash[17013]: audit 2026-03-06T22:39:50.799485+0000 mon.vm02 (mon.0) 427 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:39:51.228 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:50 vm07 bash[20848]: audit 2026-03-06T22:39:50.665793+0000 mon.vm02 (mon.0) 426 : audit [DBG] from='client.? 192.168.123.102:0/2223115473' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
192.168.123.102:0/2223115473' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-06T23:39:51.228 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:50 vm07 bash[20848]: audit 2026-03-06T22:39:50.799485+0000 mon.vm02 (mon.0) 427 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:39:51.228 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:50 vm07 bash[20848]: audit 2026-03-06T22:39:50.799485+0000 mon.vm02 (mon.0) 427 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:39:51.732 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph osd stat -f json 2026-03-06T23:39:51.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:51 vm02 bash[17013]: cluster 2026-03-06T22:39:50.751015+0000 mgr.vm02.opvwec (mgr.14199) 77 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-06T23:39:51.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:51 vm02 bash[17013]: cluster 2026-03-06T22:39:50.751015+0000 mgr.vm02.opvwec (mgr.14199) 77 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-06T23:39:52.228 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:51 vm07 bash[20848]: cluster 2026-03-06T22:39:50.751015+0000 mgr.vm02.opvwec (mgr.14199) 77 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-06T23:39:52.229 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:51 vm07 bash[20848]: cluster 2026-03-06T22:39:50.751015+0000 mgr.vm02.opvwec (mgr.14199) 77 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-06T23:39:54.180 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:53 vm07 bash[20848]: cluster 2026-03-06T22:39:52.751191+0000 mgr.vm02.opvwec (mgr.14199) 78 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-06T23:39:54.180 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:53 vm07 bash[20848]: cluster 2026-03-06T22:39:52.751191+0000 mgr.vm02.opvwec (mgr.14199) 78 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-06T23:39:54.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:53 vm02 bash[17013]: cluster 2026-03-06T22:39:52.751191+0000 mgr.vm02.opvwec (mgr.14199) 78 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-06T23:39:54.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:53 vm02 bash[17013]: cluster 2026-03-06T22:39:52.751191+0000 mgr.vm02.opvwec (mgr.14199) 78 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-06T23:39:55.090 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:54 vm02 bash[17013]: audit 2026-03-06T22:39:54.020330+0000 mon.vm02 (mon.0) 428 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-06T23:39:55.090 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:54 vm02 bash[17013]: audit 2026-03-06T22:39:54.020330+0000 mon.vm02 (mon.0) 428 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": 
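Between the journal noise, the harness is polling OSD liveness: teuthology re-runs `ceph osd stat -f json` through the `cephadm shell` wrapper shown in the DEBUG line above, and the returned summary ({"epoch":13,"num_osds":8,"num_up_osds":0,...}) still reports 0 of 8 OSDs up. A minimal sketch of such a wait loop, reusing the command line from the log; `osd_stat` and `wait_for_osds_up` are illustrative names, not teuthology's actual helpers:

```python
import json
import subprocess
import time

# Same invocation the DEBUG line above shows teuthology running on vm02.
CEPHADM = [
    "sudo", "/home/ubuntu/cephtest/cephadm",
    "--image", "harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5",
    "shell", "-c", "/etc/ceph/ceph.conf",
    "-k", "/etc/ceph/ceph.client.admin.keyring",
    "--fsid", "f8b8c16a-19ac-11f1-87e7-9b7402b99c44", "--",
]

def osd_stat() -> dict:
    # Returns e.g. {"epoch": 13, "num_osds": 8, "num_up_osds": 0, ...}
    out = subprocess.check_output(CEPHADM + ["ceph", "osd", "stat", "-f", "json"])
    return json.loads(out)

def wait_for_osds_up(timeout: float = 600, interval: float = 8) -> dict:
    """Poll until every OSD reports up (illustration of the pattern in the log)."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        stat = osd_stat()
        if stat["num_osds"] > 0 and stat["num_up_osds"] == stat["num_osds"]:
            return stat
        time.sleep(interval)
    raise TimeoutError("OSDs never came up")
```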
"auth get", "entity": "osd.1"}]: dispatch 2026-03-06T23:39:55.090 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:54 vm02 bash[17013]: audit 2026-03-06T22:39:54.021042+0000 mon.vm02 (mon.0) 429 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:55.090 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:54 vm02 bash[17013]: audit 2026-03-06T22:39:54.021042+0000 mon.vm02 (mon.0) 429 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:55.090 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:54 vm02 bash[17013]: cephadm 2026-03-06T22:39:54.021570+0000 mgr.vm02.opvwec (mgr.14199) 79 : cephadm [INF] Deploying daemon osd.1 on vm02 2026-03-06T23:39:55.090 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:54 vm02 bash[17013]: cephadm 2026-03-06T22:39:54.021570+0000 mgr.vm02.opvwec (mgr.14199) 79 : cephadm [INF] Deploying daemon osd.1 on vm02 2026-03-06T23:39:55.090 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:54 vm02 bash[17013]: audit 2026-03-06T22:39:54.236580+0000 mon.vm02 (mon.0) 430 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-06T23:39:55.090 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:54 vm02 bash[17013]: audit 2026-03-06T22:39:54.236580+0000 mon.vm02 (mon.0) 430 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-06T23:39:55.090 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:54 vm02 bash[17013]: audit 2026-03-06T22:39:54.237148+0000 mon.vm02 (mon.0) 431 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:55.090 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:54 vm02 bash[17013]: audit 2026-03-06T22:39:54.237148+0000 mon.vm02 (mon.0) 431 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:55.090 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:54 vm02 bash[17013]: cephadm 2026-03-06T22:39:54.237711+0000 mgr.vm02.opvwec (mgr.14199) 80 : cephadm [INF] Deploying daemon osd.0 on vm07 2026-03-06T23:39:55.090 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:54 vm02 bash[17013]: cephadm 2026-03-06T22:39:54.237711+0000 mgr.vm02.opvwec (mgr.14199) 80 : cephadm [INF] Deploying daemon osd.0 on vm07 2026-03-06T23:39:55.173 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:54 vm07 bash[20848]: audit 2026-03-06T22:39:54.020330+0000 mon.vm02 (mon.0) 428 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-06T23:39:55.173 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:54 vm07 bash[20848]: audit 2026-03-06T22:39:54.020330+0000 mon.vm02 (mon.0) 428 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-06T23:39:55.173 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:54 vm07 bash[20848]: audit 2026-03-06T22:39:54.021042+0000 mon.vm02 (mon.0) 429 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' 
entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:55.173 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:54 vm07 bash[20848]: audit 2026-03-06T22:39:54.021042+0000 mon.vm02 (mon.0) 429 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:55.173 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:54 vm07 bash[20848]: cephadm 2026-03-06T22:39:54.021570+0000 mgr.vm02.opvwec (mgr.14199) 79 : cephadm [INF] Deploying daemon osd.1 on vm02 2026-03-06T23:39:55.173 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:54 vm07 bash[20848]: cephadm 2026-03-06T22:39:54.021570+0000 mgr.vm02.opvwec (mgr.14199) 79 : cephadm [INF] Deploying daemon osd.1 on vm02 2026-03-06T23:39:55.173 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:54 vm07 bash[20848]: audit 2026-03-06T22:39:54.236580+0000 mon.vm02 (mon.0) 430 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-06T23:39:55.173 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:54 vm07 bash[20848]: audit 2026-03-06T22:39:54.236580+0000 mon.vm02 (mon.0) 430 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-06T23:39:55.173 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:54 vm07 bash[20848]: audit 2026-03-06T22:39:54.237148+0000 mon.vm02 (mon.0) 431 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:55.173 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:54 vm07 bash[20848]: audit 2026-03-06T22:39:54.237148+0000 mon.vm02 (mon.0) 431 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:55.173 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:54 vm07 bash[20848]: cephadm 2026-03-06T22:39:54.237711+0000 mgr.vm02.opvwec (mgr.14199) 80 : cephadm [INF] Deploying daemon osd.0 on vm07 2026-03-06T23:39:55.173 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:54 vm07 bash[20848]: cephadm 2026-03-06T22:39:54.237711+0000 mgr.vm02.opvwec (mgr.14199) 80 : cephadm [INF] Deploying daemon osd.0 on vm07 2026-03-06T23:39:55.360 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:55 vm02 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-06T23:39:55.360 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:55 vm02 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-06T23:39:55.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:55 vm07 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-06T23:39:55.910 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:55 vm07 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-06T23:39:56.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:56 vm07 bash[20848]: cluster 2026-03-06T22:39:54.751359+0000 mgr.vm02.opvwec (mgr.14199) 81 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-06T23:39:56.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:56 vm07 bash[20848]: cluster 2026-03-06T22:39:54.751359+0000 mgr.vm02.opvwec (mgr.14199) 81 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-06T23:39:56.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:56 vm07 bash[20848]: audit 2026-03-06T22:39:55.387918+0000 mon.vm02 (mon.0) 432 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:56.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:56 vm07 bash[20848]: audit 2026-03-06T22:39:55.387918+0000 mon.vm02 (mon.0) 432 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:56.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:56 vm07 bash[20848]: audit 2026-03-06T22:39:55.392958+0000 mon.vm02 (mon.0) 433 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:56.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:56 vm07 bash[20848]: audit 2026-03-06T22:39:55.392958+0000 mon.vm02 (mon.0) 433 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:56.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:56 vm07 bash[20848]: audit 2026-03-06T22:39:55.394040+0000 mon.vm02 (mon.0) 434 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-06T23:39:56.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:56 vm07 bash[20848]: audit 2026-03-06T22:39:55.394040+0000 mon.vm02 (mon.0) 434 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-06T23:39:56.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:56 vm07 bash[20848]: audit 2026-03-06T22:39:55.394640+0000 mon.vm02 (mon.0) 435 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:56.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:56 vm07 bash[20848]: audit 2026-03-06T22:39:55.394640+0000 mon.vm02 (mon.0) 435 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:56.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:56 vm07 bash[20848]: cephadm 2026-03-06T22:39:55.395104+0000 mgr.vm02.opvwec (mgr.14199) 82 : cephadm [INF] Deploying daemon osd.3 on vm02 2026-03-06T23:39:56.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:56 vm07 bash[20848]: cephadm 2026-03-06T22:39:55.395104+0000 mgr.vm02.opvwec (mgr.14199) 82 : cephadm [INF] Deploying daemon osd.3 on vm02 2026-03-06T23:39:56.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:56 vm07 bash[20848]: audit 2026-03-06T22:39:55.621539+0000 mon.vm02 (mon.0) 436 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:56.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:56 vm07 bash[20848]: audit 2026-03-06T22:39:55.621539+0000 mon.vm02 (mon.0) 436 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:56.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:56 vm07 bash[20848]: audit 2026-03-06T22:39:55.631640+0000 mon.vm02 (mon.0) 437 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:56.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:56 vm07 bash[20848]: audit 2026-03-06T22:39:55.631640+0000 mon.vm02 (mon.0) 437 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:39:56.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:56 vm07 bash[20848]: audit 2026-03-06T22:39:55.634005+0000 mon.vm02 (mon.0) 438 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-06T23:39:56.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:56 vm07 bash[20848]: audit 2026-03-06T22:39:55.634005+0000 mon.vm02 (mon.0) 438 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-06T23:39:56.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:56 vm07 bash[20848]: audit 2026-03-06T22:39:55.634573+0000 mon.vm02 (mon.0) 439 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:56.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:56 vm07 bash[20848]: audit 2026-03-06T22:39:55.634573+0000 mon.vm02 (mon.0) 439 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:39:56.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:56 vm07 bash[20848]: cephadm 2026-03-06T22:39:55.635044+0000 mgr.vm02.opvwec (mgr.14199) 83 : cephadm [INF] Deploying daemon osd.2 on vm07 2026-03-06T23:39:56.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:56 vm07 bash[20848]: cephadm 2026-03-06T22:39:55.635044+0000 mgr.vm02.opvwec (mgr.14199) 83 : cephadm [INF] Deploying daemon osd.2 on vm07 2026-03-06T23:39:56.732 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:56 vm02 bash[17013]: cluster 2026-03-06T22:39:54.751359+0000 mgr.vm02.opvwec (mgr.14199) 81 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-06T23:39:56.732 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:56 vm02 bash[17013]: cluster 2026-03-06T22:39:54.751359+0000 mgr.vm02.opvwec (mgr.14199) 81 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B 
2026-03-06T23:39:56.732 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:56 vm02 bash[17013]: audit 2026-03-06T22:39:55.387918+0000 mon.vm02 (mon.0) 432 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:56.732 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:56 vm02 bash[17013]: audit 2026-03-06T22:39:55.392958+0000 mon.vm02 (mon.0) 433 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:56.732 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:56 vm02 bash[17013]: audit 2026-03-06T22:39:55.394040+0000 mon.vm02 (mon.0) 434 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-06T23:39:56.732 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:56 vm02 bash[17013]: audit 2026-03-06T22:39:55.394640+0000 mon.vm02 (mon.0) 435 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:39:56.732 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:56 vm02 bash[17013]: cephadm 2026-03-06T22:39:55.395104+0000 mgr.vm02.opvwec (mgr.14199) 82 : cephadm [INF] Deploying daemon osd.3 on vm02
2026-03-06T23:39:56.732 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:56 vm02 bash[17013]: audit 2026-03-06T22:39:55.621539+0000 mon.vm02 (mon.0) 436 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:56.732 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:56 vm02 bash[17013]: audit 2026-03-06T22:39:55.631640+0000 mon.vm02 (mon.0) 437 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:56.732 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:56 vm02 bash[17013]: audit 2026-03-06T22:39:55.634005+0000 mon.vm02 (mon.0) 438 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-06T23:39:56.732 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:56 vm02 bash[17013]: audit 2026-03-06T22:39:55.634573+0000 mon.vm02 (mon.0) 439 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:39:56.732 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:56 vm02 bash[17013]: cephadm 2026-03-06T22:39:55.635044+0000 mgr.vm02.opvwec (mgr.14199) 83 : cephadm [INF] Deploying daemon osd.2 on vm07
2026-03-06T23:39:56.982 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:56 vm02 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-06T23:39:57.212 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:56 vm07 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-06T23:39:57.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:56 vm02 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-06T23:39:57.386 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:39:57.471 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:57 vm07 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-06T23:39:58.193 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-06T23:39:58.213 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:58 vm02 bash[17013]: cluster 2026-03-06T22:39:56.751541+0000 mgr.vm02.opvwec (mgr.14199) 84 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:58.214 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:58 vm02 bash[17013]: audit 2026-03-06T22:39:57.105619+0000 mon.vm02 (mon.0) 440 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:58.214 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:58 vm02 bash[17013]: audit 2026-03-06T22:39:57.111461+0000 mon.vm02 (mon.0) 441 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:58.214 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:58 vm02 bash[17013]: audit 2026-03-06T22:39:57.112694+0000 mon.vm02 (mon.0) 442 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-06T23:39:58.214 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:58 vm02 bash[17013]: audit 2026-03-06T22:39:57.114584+0000 mon.vm02 (mon.0) 443 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:39:58.214 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:58 vm02 bash[17013]: cephadm 2026-03-06T22:39:57.117550+0000 mgr.vm02.opvwec (mgr.14199) 85 : cephadm [INF] Deploying daemon osd.4 on vm02
2026-03-06T23:39:58.214 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:58 vm02 bash[17013]: audit 2026-03-06T22:39:57.369859+0000 mon.vm02 (mon.0) 444 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:58.214 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:58 vm02 bash[17013]: audit 2026-03-06T22:39:57.391471+0000 mon.vm02 (mon.0) 445 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:58.214 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:58 vm02 bash[17013]: audit 2026-03-06T22:39:57.398223+0000 mon.vm02 (mon.0) 446 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-06T23:39:58.214 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:58 vm02 bash[17013]: audit 2026-03-06T22:39:57.402665+0000 mon.vm02 (mon.0) 447 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:39:58.214 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:58 vm02 bash[17013]: cephadm 2026-03-06T22:39:57.404467+0000 mgr.vm02.opvwec (mgr.14199) 86 : cephadm [INF] Deploying daemon osd.5 on vm07
2026-03-06T23:39:58.322 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":13,"num_osds":8,"num_up_osds":0,"osd_up_since":0,"num_in_osds":8,"osd_in_since":1772836784,"num_remapped_pgs":0}
2026-03-06T23:39:58.465 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:58 vm07 bash[20848]: cluster 2026-03-06T22:39:56.751541+0000 mgr.vm02.opvwec (mgr.14199) 84 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:39:58.466 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:58 vm07 bash[20848]: audit 2026-03-06T22:39:57.105619+0000 mon.vm02 (mon.0) 440 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:58.466 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:58 vm07 bash[20848]: audit 2026-03-06T22:39:57.111461+0000 mon.vm02 (mon.0) 441 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:58.466 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:58 vm07 bash[20848]: audit 2026-03-06T22:39:57.112694+0000 mon.vm02 (mon.0) 442 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-06T23:39:58.466 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:58 vm07 bash[20848]: audit 2026-03-06T22:39:57.114584+0000 mon.vm02 (mon.0) 443 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:39:58.466 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:58 vm07 bash[20848]: cephadm 2026-03-06T22:39:57.117550+0000 mgr.vm02.opvwec (mgr.14199) 85 : cephadm [INF] Deploying daemon osd.4 on vm02
2026-03-06T23:39:58.466 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:58 vm07 bash[20848]: audit 2026-03-06T22:39:57.369859+0000 mon.vm02 (mon.0) 444 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:58.466 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:58 vm07 bash[20848]: audit 2026-03-06T22:39:57.391471+0000 mon.vm02 (mon.0) 445 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:39:58.466 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:58 vm07 bash[20848]: audit 2026-03-06T22:39:57.398223+0000 mon.vm02 (mon.0) 446 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-06T23:39:58.466 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:58 vm07 bash[20848]: audit 2026-03-06T22:39:57.402665+0000 mon.vm02 (mon.0) 447 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:39:58.466 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:58 vm07 bash[20848]: cephadm 2026-03-06T22:39:57.404467+0000 mgr.vm02.opvwec (mgr.14199) 86 : cephadm [INF] Deploying daemon osd.5 on vm07
2026-03-06T23:39:58.989 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:58 vm07 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-06T23:39:59.127 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:58 vm02 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-06T23:39:59.296 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:59 vm07 bash[20848]: audit 2026-03-06T22:39:58.191364+0000 mon.vm07 (mon.1) 10 : audit [DBG] from='client.? 192.168.123.102:0/3084491620' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-06T23:39:59.296 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:39:59 vm07 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
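The recurring systemd complaint points at line 23 of the unit template installed for this cluster's daemons, which sets KillMode=none; each newly started daemon instance re-triggers the warning. On a cephadm-managed host the unit files are owned and regenerated by cephadm, so hand-editing is not a real fix, but for reference this is roughly how a drop-in override would apply the KillMode=mixed the warning itself suggests (an illustrative sketch only; the path mirrors the unit named in the log):

```python
import pathlib
import subprocess

FSID = "f8b8c16a-19ac-11f1-87e7-9b7402b99c44"
dropin_dir = pathlib.Path(f"/etc/systemd/system/ceph-{FSID}@.service.d")

# Write a drop-in that overrides the deprecated KillMode=none with the
# 'mixed' mode recommended by the warning.
dropin_dir.mkdir(parents=True, exist_ok=True)
(dropin_dir / "10-killmode.conf").write_text(
    "[Service]\n"
    "KillMode=mixed\n"
)

# Pick up the override without restarting the running daemons.
subprocess.run(["systemctl", "daemon-reload"], check=True)
```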
2026-03-06T23:39:59.323 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph osd stat -f json
2026-03-06T23:39:59.398 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:59 vm02 bash[17013]: audit 2026-03-06T22:39:58.191364+0000 mon.vm07 (mon.1) 10 : audit [DBG] from='client.? 192.168.123.102:0/3084491620' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-06T23:39:59.398 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:39:59 vm02 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-06T23:40:00.426 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:00 vm07 bash[20848]: cluster 2026-03-06T22:39:58.751779+0000 mgr.vm02.opvwec (mgr.14199) 87 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:40:00.426 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:00 vm07 bash[20848]: audit 2026-03-06T22:39:59.292933+0000 mon.vm02 (mon.0) 448 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:00.426 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:00 vm07 bash[20848]: audit 2026-03-06T22:39:59.308685+0000 mon.vm02 (mon.0) 449 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:00.426 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:00 vm07 bash[20848]: audit 2026-03-06T22:39:59.311134+0000 mon.vm02 (mon.0) 450 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch
2026-03-06T23:40:00.426 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:00 vm07 bash[20848]: audit 2026-03-06T22:39:59.312058+0000 mon.vm02 (mon.0) 451 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:40:00.427 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:00 vm07 bash[20848]: cephadm 2026-03-06T22:39:59.312587+0000 mgr.vm02.opvwec (mgr.14199) 88 : cephadm [INF] Deploying daemon osd.7 on vm02
2026-03-06T23:40:00.427 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:00 vm07 bash[20848]: audit 2026-03-06T22:39:59.349157+0000 mon.vm02 (mon.0) 452 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:00.427 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:00 vm07 bash[20848]: audit 2026-03-06T22:39:59.367076+0000 mon.vm02 (mon.0) 453 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:00.427 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:00 vm07 bash[20848]: audit 2026-03-06T22:39:59.377318+0000 mon.vm02 (mon.0) 454 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-06T23:40:00.427 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:00 vm07 bash[20848]: audit 2026-03-06T22:39:59.379157+0000 mon.vm02 (mon.0) 455 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:40:00.427 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:00 vm07 bash[20848]: cephadm 2026-03-06T22:39:59.383609+0000 mgr.vm02.opvwec (mgr.14199) 89 : cephadm [INF] Deploying daemon osd.6 on vm07
2026-03-06T23:40:00.427 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:00 vm07 bash[20848]: cluster 2026-03-06T22:40:00.000101+0000 mon.vm02 (mon.0) 456 : cluster [INF] overall HEALTH_OK
2026-03-06T23:40:00.646 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:00 vm02 bash[17013]: cluster 2026-03-06T22:39:58.751779+0000 mgr.vm02.opvwec (mgr.14199) 87 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:40:00.646 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:00 vm02 bash[17013]: audit 2026-03-06T22:39:59.292933+0000 mon.vm02 (mon.0) 448 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:00.646 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:00 vm02 bash[17013]: audit 2026-03-06T22:39:59.308685+0000 mon.vm02 (mon.0) 449 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:00.646 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:00 vm02 bash[17013]: audit 2026-03-06T22:39:59.311134+0000 mon.vm02 (mon.0) 450 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch
2026-03-06T23:40:00.646 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:00 vm02 bash[17013]: audit 2026-03-06T22:39:59.312058+0000 mon.vm02 (mon.0) 451 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:40:00.646 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:00 vm02 bash[17013]: cephadm 2026-03-06T22:39:59.312587+0000 mgr.vm02.opvwec (mgr.14199) 88 : cephadm [INF] Deploying daemon osd.7 on vm02
2026-03-06T23:40:00.646 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:00 vm02 bash[17013]: audit 2026-03-06T22:39:59.349157+0000 mon.vm02 (mon.0) 452 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:00.646 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:00 vm02 bash[17013]: audit 2026-03-06T22:39:59.367076+0000 mon.vm02 (mon.0) 453 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:00.646 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:00 vm02 bash[17013]: audit 2026-03-06T22:39:59.377318+0000 mon.vm02 (mon.0) 454 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-06T23:40:00.646 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:00 vm02 bash[17013]: audit 2026-03-06T22:39:59.379157+0000 mon.vm02 (mon.0) 455 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:40:00.646 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:00 vm02 bash[17013]: cephadm 2026-03-06T22:39:59.383609+0000 mgr.vm02.opvwec (mgr.14199) 89 : cephadm [INF] Deploying daemon osd.6 on vm07
2026-03-06T23:40:00.646 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:00 vm02 bash[17013]: cluster 2026-03-06T22:40:00.000101+0000 mon.vm02 (mon.0) 456 : cluster [INF] overall HEALTH_OK
2026-03-06T23:40:00.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:00 vm07 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-06T23:40:01.292 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:01 vm07 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-06T23:40:01.463 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:01 vm02 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-06T23:40:01.463 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:01 vm02 bash[17013]: audit 2026-03-06T22:40:00.853646+0000 mon.vm02 (mon.0) 457 : audit [INF] from='osd.1 [v2:192.168.123.102:6802/1450039054,v1:192.168.123.102:6803/1450039054]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-06T23:40:01.558 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:01 vm07 bash[20848]: audit 2026-03-06T22:40:00.853646+0000 mon.vm02 (mon.0) 457 : audit [INF] from='osd.1 [v2:192.168.123.102:6802/1450039054,v1:192.168.123.102:6803/1450039054]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-06T23:40:01.747 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:01 vm02 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-06T23:40:02.620 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:02 vm02 bash[17013]: cluster 2026-03-06T22:40:00.751991+0000 mgr.vm02.opvwec (mgr.14199) 90 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:40:02.620 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:02 vm02 bash[17013]: audit 2026-03-06T22:40:01.340556+0000 mon.vm02 (mon.0) 458 : audit [INF] from='osd.1 [v2:192.168.123.102:6802/1450039054,v1:192.168.123.102:6803/1450039054]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-06T23:40:02.621 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:02 vm02 bash[17013]: cluster 2026-03-06T22:40:01.343275+0000 mon.vm02 (mon.0) 459 : cluster [DBG] osdmap e14: 8 total, 0 up, 8 in
2026-03-06T23:40:02.621 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:02 vm02 bash[17013]: audit 2026-03-06T22:40:01.343443+0000 mon.vm02 (mon.0) 460 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-06T23:40:02.621 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:02 vm02 bash[17013]: audit 2026-03-06T22:40:01.343506+0000 mon.vm02 (mon.0) 461 : audit [INF] from='osd.1 [v2:192.168.123.102:6802/1450039054,v1:192.168.123.102:6803/1450039054]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch
2026-03-06T23:40:02.621 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:02 vm02 bash[17013]: audit 2026-03-06T22:40:01.343577+0000 mon.vm02 (mon.0) 462 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
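The `osd crush create-or-move` call above registers osd.1 under host=vm02 with "weight": 0.0195. CRUSH weights are conventionally the device capacity in TiB, so 0.0195 corresponds to roughly a 20 GiB device; the device size is an inference from the weight, not something the log states. A quick check of the conversion:

```python
# CRUSH weight is capacity in TiB; verify 0.0195 ~= a 20 GiB device.
weight_tib = 0.0195
capacity_gib = weight_tib * 1024          # TiB -> GiB
print(f"{capacity_gib:.1f} GiB")          # ~20.0 GiB

# Going the other way: the weight an OSD would report for 20 GiB.
print(round(20 / 1024, 4))                # 0.0195
```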
cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-06T23:40:02.621 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:02 vm02 bash[17013]: audit 2026-03-06T22:40:01.343611+0000 mon.vm02 (mon.0) 463 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-06T23:40:02.621 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:02 vm02 bash[17013]: audit 2026-03-06T22:40:01.343611+0000 mon.vm02 (mon.0) 463 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-06T23:40:02.621 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:02 vm02 bash[17013]: audit 2026-03-06T22:40:01.343643+0000 mon.vm02 (mon.0) 464 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-06T23:40:02.621 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:02 vm02 bash[17013]: audit 2026-03-06T22:40:01.343643+0000 mon.vm02 (mon.0) 464 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-06T23:40:02.621 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:02 vm02 bash[17013]: audit 2026-03-06T22:40:01.343672+0000 mon.vm02 (mon.0) 465 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-06T23:40:02.621 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:02 vm02 bash[17013]: audit 2026-03-06T22:40:01.343672+0000 mon.vm02 (mon.0) 465 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-06T23:40:02.621 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:02 vm02 bash[17013]: audit 2026-03-06T22:40:01.343699+0000 mon.vm02 (mon.0) 466 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-06T23:40:02.621 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:02 vm02 bash[17013]: audit 2026-03-06T22:40:01.343699+0000 mon.vm02 (mon.0) 466 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-06T23:40:02.621 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:02 vm02 bash[17013]: audit 2026-03-06T22:40:01.343729+0000 mon.vm02 (mon.0) 467 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-06T23:40:02.621 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:02 vm02 bash[17013]: audit 2026-03-06T22:40:01.343729+0000 mon.vm02 (mon.0) 467 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-06T23:40:02.621 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:02 vm02 bash[17013]: audit 2026-03-06T22:40:01.343757+0000 mon.vm02 (mon.0) 468 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-06T23:40:02.621 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:02 vm02 bash[17013]: audit 2026-03-06T22:40:01.343757+0000 mon.vm02 (mon.0) 468 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 
cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-06T23:40:02.621 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:02 vm02 bash[17013]: audit 2026-03-06T22:40:01.348679+0000 mon.vm02 (mon.0) 469 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:40:02.621 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:02 vm02 bash[17013]: audit 2026-03-06T22:40:01.348679+0000 mon.vm02 (mon.0) 469 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:40:02.621 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:02 vm02 bash[17013]: audit 2026-03-06T22:40:01.349668+0000 mon.vm07 (mon.1) 11 : audit [INF] from='osd.0 [v2:192.168.123.107:6800/1132330073,v1:192.168.123.107:6801/1132330073]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-06T23:40:02.621 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:02 vm02 bash[17013]: audit 2026-03-06T22:40:01.349668+0000 mon.vm07 (mon.1) 11 : audit [INF] from='osd.0 [v2:192.168.123.107:6800/1132330073,v1:192.168.123.107:6801/1132330073]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-06T23:40:02.621 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:02 vm02 bash[17013]: audit 2026-03-06T22:40:01.350280+0000 mon.vm02 (mon.0) 470 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-06T23:40:02.621 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:02 vm02 bash[17013]: audit 2026-03-06T22:40:01.350280+0000 mon.vm02 (mon.0) 470 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-06T23:40:02.621 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:02 vm02 bash[17013]: audit 2026-03-06T22:40:01.356013+0000 mon.vm02 (mon.0) 471 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:40:02.621 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:02 vm02 bash[17013]: audit 2026-03-06T22:40:01.356013+0000 mon.vm02 (mon.0) 471 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:40:02.621 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:02 vm02 bash[17013]: audit 2026-03-06T22:40:01.624433+0000 mon.vm02 (mon.0) 472 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:40:02.621 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:02 vm02 bash[17013]: audit 2026-03-06T22:40:01.624433+0000 mon.vm02 (mon.0) 472 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:40:02.621 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:02 vm02 bash[17013]: audit 2026-03-06T22:40:01.635121+0000 mon.vm02 (mon.0) 473 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:40:02.621 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:02 vm02 bash[17013]: audit 2026-03-06T22:40:01.635121+0000 mon.vm02 (mon.0) 473 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:40:02.702 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:02 vm07 bash[20848]: cluster 2026-03-06T22:40:00.751991+0000 mgr.vm02.opvwec (mgr.14199) 90 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 
2026-03-06T23:40:02.702 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:02 vm07 bash[20848]: audit 2026-03-06T22:40:01.340556+0000 mon.vm02 (mon.0) 458 : audit [INF] from='osd.1 [v2:192.168.123.102:6802/1450039054,v1:192.168.123.102:6803/1450039054]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-06T23:40:02.702 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:02 vm07 bash[20848]: cluster 2026-03-06T22:40:01.343275+0000 mon.vm02 (mon.0) 459 : cluster [DBG] osdmap e14: 8 total, 0 up, 8 in
2026-03-06T23:40:02.702 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:02 vm07 bash[20848]: audit 2026-03-06T22:40:01.343443+0000 mon.vm02 (mon.0) 460 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-06T23:40:02.702 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:02 vm07 bash[20848]: audit 2026-03-06T22:40:01.343506+0000 mon.vm02 (mon.0) 461 : audit [INF] from='osd.1 [v2:192.168.123.102:6802/1450039054,v1:192.168.123.102:6803/1450039054]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch
2026-03-06T23:40:02.702 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:02 vm07 bash[20848]: audit 2026-03-06T22:40:01.343577+0000 mon.vm02 (mon.0) 462 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-06T23:40:02.702 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:02 vm07 bash[20848]: audit 2026-03-06T22:40:01.343611+0000 mon.vm02 (mon.0) 463 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-06T23:40:02.703 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:02 vm07 bash[20848]: audit 2026-03-06T22:40:01.343643+0000 mon.vm02 (mon.0) 464 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-06T23:40:02.703 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:02 vm07 bash[20848]: audit 2026-03-06T22:40:01.343672+0000 mon.vm02 (mon.0) 465 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-06T23:40:02.703 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:02 vm07 bash[20848]: audit 2026-03-06T22:40:01.343699+0000 mon.vm02 (mon.0) 466 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-06T23:40:02.703 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:02 vm07 bash[20848]: audit 2026-03-06T22:40:01.343729+0000 mon.vm02 (mon.0) 467 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-06T23:40:02.703 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:02 vm07 bash[20848]: audit 2026-03-06T22:40:01.343757+0000 mon.vm02 (mon.0) 468 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-06T23:40:02.703 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:02 vm07 bash[20848]: audit 2026-03-06T22:40:01.348679+0000 mon.vm02 (mon.0) 469 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:02.703 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:02 vm07 bash[20848]: audit 2026-03-06T22:40:01.349668+0000 mon.vm07 (mon.1) 11 : audit [INF] from='osd.0 [v2:192.168.123.107:6800/1132330073,v1:192.168.123.107:6801/1132330073]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-06T23:40:02.703 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:02 vm07 bash[20848]: audit 2026-03-06T22:40:01.350280+0000 mon.vm02 (mon.0) 470 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-06T23:40:02.703 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:02 vm07 bash[20848]: audit 2026-03-06T22:40:01.356013+0000 mon.vm02 (mon.0) 471 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:02.703 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:02 vm07 bash[20848]: audit 2026-03-06T22:40:01.624433+0000 mon.vm02 (mon.0) 472 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:02.703 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:02 vm07 bash[20848]: audit 2026-03-06T22:40:01.635121+0000 mon.vm02 (mon.0) 473 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:03.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:03 vm02 bash[17013]: audit 2026-03-06T22:40:02.345313+0000 mon.vm02 (mon.0) 474 : audit [INF] from='osd.1 [v2:192.168.123.102:6802/1450039054,v1:192.168.123.102:6803/1450039054]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished
2026-03-06T23:40:03.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:03 vm02 bash[17013]: audit 2026-03-06T22:40:02.345425+0000 mon.vm02 (mon.0) 475 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
2026-03-06T23:40:03.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:03 vm02 bash[17013]: cluster 2026-03-06T22:40:02.351840+0000 mon.vm02 (mon.0) 476 : cluster [DBG] osdmap e15: 8 total, 0 up, 8 in
2026-03-06T23:40:03.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:03 vm02 bash[17013]: audit 2026-03-06T22:40:02.353152+0000 mon.vm07 (mon.1) 12 : audit [INF] from='osd.0 [v2:192.168.123.107:6800/1132330073,v1:192.168.123.107:6801/1132330073]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-06T23:40:03.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:03 vm02 bash[17013]: audit 2026-03-06T22:40:02.358817+0000 mon.vm02 (mon.0) 477 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-06T23:40:03.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:03 vm02 bash[17013]: audit 2026-03-06T22:40:02.358958+0000 mon.vm02 (mon.0) 478 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-06T23:40:03.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:03 vm02 bash[17013]: audit 2026-03-06T22:40:02.359055+0000 mon.vm02 (mon.0) 479 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-06T23:40:03.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:03 vm02 bash[17013]: audit 2026-03-06T22:40:02.359091+0000 mon.vm02 (mon.0) 480 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-06T23:40:03.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:03 vm02 bash[17013]: audit 2026-03-06T22:40:02.359123+0000 mon.vm02 (mon.0) 481 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-06T23:40:03.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:03 vm02 bash[17013]: audit 2026-03-06T22:40:02.359152+0000 mon.vm02 (mon.0) 482 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-06T23:40:03.494 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:03 vm02 bash[17013]: audit 2026-03-06T22:40:02.359181+0000 mon.vm02 (mon.0) 483 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-06T23:40:03.494 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:03 vm02 bash[17013]: audit 2026-03-06T22:40:02.359213+0000 mon.vm02 (mon.0) 484 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-06T23:40:03.494 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:03 vm02 bash[17013]: audit 2026-03-06T22:40:02.359243+0000 mon.vm02 (mon.0) 485 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-06T23:40:03.494 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:03 vm02 bash[17013]: audit 2026-03-06T22:40:02.360709+0000 mon.vm02 (mon.0) 486 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-06T23:40:03.494 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:03 vm02 bash[17013]: audit 2026-03-06T22:40:03.333765+0000 mon.vm02 (mon.0) 487 : audit [INF] from='osd.1 [v2:192.168.123.102:6802/1450039054,v1:192.168.123.102:6803/1450039054]' entity='osd.1'
2026-03-06T23:40:03.494 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:03 vm02 bash[17013]: audit 2026-03-06T22:40:03.347987+0000 mon.vm02 (mon.0) 488 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-06T23:40:03.494 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:03 vm02 bash[17013]: cluster 2026-03-06T22:40:03.350880+0000 mon.vm02 (mon.0) 489 : cluster [INF] osd.1 [v2:192.168.123.102:6802/1450039054,v1:192.168.123.102:6803/1450039054] boot
2026-03-06T23:40:03.494 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:03 vm02 bash[17013]: cluster 2026-03-06T22:40:03.351004+0000 mon.vm02 (mon.0) 490 : cluster [DBG] osdmap e16: 8 total, 1 up, 8 in
2026-03-06T23:40:03.494 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:03 vm02 bash[17013]: audit 2026-03-06T22:40:03.351168+0000 mon.vm02 (mon.0) 491 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-06T23:40:03.494 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:03 vm02 bash[17013]: audit 2026-03-06T22:40:03.351240+0000 mon.vm02 (mon.0) 492 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-06T23:40:03.494 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:03 vm02 bash[17013]: audit 2026-03-06T22:40:03.351505+0000 mon.vm02 (mon.0) 493 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-06T23:40:03.494 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:03 vm02 bash[17013]: audit 2026-03-06T22:40:03.351612+0000 mon.vm02 (mon.0) 494 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-06T23:40:03.494 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:03 vm02 bash[17013]: audit 2026-03-06T22:40:03.351649+0000 mon.vm02 (mon.0) 495 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-06T23:40:03.494 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:03 vm02 bash[17013]: audit 2026-03-06T22:40:03.351680+0000 mon.vm02 (mon.0) 496 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-06T23:40:03.494 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:03 vm02 bash[17013]: audit 2026-03-06T22:40:03.351889+0000 mon.vm02 (mon.0) 497 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-06T23:40:03.494 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:03 vm02 bash[17013]: audit 2026-03-06T22:40:03.351926+0000 mon.vm02 (mon.0) 498 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-06T23:40:03.494 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:03 vm02 bash[17013]: audit 2026-03-06T22:40:03.358671+0000 mon.vm02 (mon.0) 499 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-06T23:40:03.681 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:03 vm07 bash[20848]: audit 2026-03-06T22:40:02.345313+0000 mon.vm02 (mon.0) 474 : audit [INF] from='osd.1 [v2:192.168.123.102:6802/1450039054,v1:192.168.123.102:6803/1450039054]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished
2026-03-06T23:40:03.682 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:03 vm07 bash[20848]: audit 2026-03-06T22:40:02.345425+0000 mon.vm02 (mon.0) 475 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
2026-03-06T23:40:03.682 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:03 vm07 bash[20848]: cluster 2026-03-06T22:40:02.351840+0000 mon.vm02 (mon.0) 476 : cluster [DBG] osdmap e15: 8 total, 0 up, 8 in
2026-03-06T23:40:03.682 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:03 vm07 bash[20848]: audit 2026-03-06T22:40:02.353152+0000 mon.vm07 (mon.1) 12 : audit [INF] from='osd.0 [v2:192.168.123.107:6800/1132330073,v1:192.168.123.107:6801/1132330073]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-06T23:40:03.682 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:03 vm07 bash[20848]: audit 2026-03-06T22:40:02.358817+0000 mon.vm02 (mon.0) 477 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-06T23:40:03.682 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:03 vm07 bash[20848]: audit 2026-03-06T22:40:02.358958+0000 mon.vm02 (mon.0) 478 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-06T23:40:03.682 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:03 vm07 bash[20848]: audit 2026-03-06T22:40:02.359055+0000 mon.vm02 (mon.0) 479 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-06T23:40:03.682 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:03 vm07 bash[20848]: audit 2026-03-06T22:40:02.359091+0000 mon.vm02 (mon.0) 480 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-06T23:40:03.682 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:03 vm07 bash[20848]: audit 2026-03-06T22:40:02.359123+0000 mon.vm02 (mon.0) 481 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-06T23:40:03.682 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:03 vm07 bash[20848]: audit 2026-03-06T22:40:02.359152+0000 mon.vm02 (mon.0) 482 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-06T23:40:03.682 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:03 vm07 bash[20848]: audit 2026-03-06T22:40:02.359181+0000 mon.vm02 (mon.0) 483 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-06T23:40:03.682 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:03 vm07 bash[20848]: audit 2026-03-06T22:40:02.359213+0000 mon.vm02 (mon.0) 484 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-06T23:40:03.682 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:03 vm07 bash[20848]: audit 2026-03-06T22:40:02.359243+0000 mon.vm02 (mon.0) 485 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-06T23:40:03.682 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:03 vm07 bash[20848]: audit 2026-03-06T22:40:02.360709+0000 mon.vm02 (mon.0) 486 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-06T23:40:03.682 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:03 vm07 bash[20848]: audit 2026-03-06T22:40:03.333765+0000 mon.vm02 (mon.0) 487 : audit [INF] from='osd.1 [v2:192.168.123.102:6802/1450039054,v1:192.168.123.102:6803/1450039054]' entity='osd.1'
2026-03-06T23:40:03.682 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:03 vm07 bash[20848]: audit 2026-03-06T22:40:03.347987+0000 mon.vm02 (mon.0) 488 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-06T23:40:03.682 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:03 vm07 bash[20848]: cluster 2026-03-06T22:40:03.350880+0000 mon.vm02 (mon.0) 489 : cluster [INF] osd.1 [v2:192.168.123.102:6802/1450039054,v1:192.168.123.102:6803/1450039054] boot
2026-03-06T23:40:03.682 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:03 vm07 bash[20848]: cluster 2026-03-06T22:40:03.351004+0000 mon.vm02 (mon.0) 490 : cluster [DBG] osdmap e16: 8 total, 1 up, 8 in
2026-03-06T23:40:03.682 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:03 vm07 bash[20848]: audit 2026-03-06T22:40:03.351168+0000 mon.vm02 (mon.0) 491 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-06T23:40:03.683 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:03 vm07 bash[20848]: audit 2026-03-06T22:40:03.351240+0000 mon.vm02 (mon.0) 492 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-06T23:40:03.683 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:03 vm07 bash[20848]: audit 2026-03-06T22:40:03.351505+0000 mon.vm02 (mon.0) 493 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-06T23:40:03.683 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:03 vm07 bash[20848]: audit 2026-03-06T22:40:03.351612+0000 mon.vm02 (mon.0) 494 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-06T23:40:03.683 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:03 vm07 bash[20848]: audit 2026-03-06T22:40:03.351649+0000 mon.vm02 (mon.0) 495 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-06T23:40:03.683 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:03 vm07 bash[20848]: audit 2026-03-06T22:40:03.351680+0000 mon.vm02 (mon.0) 496 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-06T23:40:03.683 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:03 vm07 bash[20848]: audit 2026-03-06T22:40:03.351889+0000 mon.vm02 (mon.0) 497 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-06T23:40:03.683 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:03 vm07 bash[20848]: audit 2026-03-06T22:40:03.351926+0000 mon.vm02 (mon.0) 498 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-06T23:40:03.683 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:03 vm07 bash[20848]: audit 2026-03-06T22:40:03.358671+0000 mon.vm02 (mon.0) 499 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-06T23:40:04.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:04 vm07 bash[20848]: cluster 2026-03-06T22:40:01.875378+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-06T23:40:04.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:04 vm07 bash[20848]: cluster 2026-03-06T22:40:01.875500+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-06T23:40:04.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:04 vm07 bash[20848]: cluster 2026-03-06T22:40:02.350069+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-06T23:40:04.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:04 vm07 bash[20848]: cluster 2026-03-06T22:40:02.350160+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-06T23:40:04.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:04 vm07 bash[20848]: cluster 2026-03-06T22:40:02.752263+0000 mgr.vm02.opvwec (mgr.14199) 91 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:40:04.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:04 vm07 bash[20848]: audit 2026-03-06T22:40:03.636667+0000 mon.vm02 (mon.0) 500 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-06T23:40:04.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:04 vm07 bash[20848]: audit 2026-03-06T22:40:03.639911+0000 mon.vm07 (mon.1) 13 : audit [INF] from='osd.3 [v2:192.168.123.102:6810/1555906781,v1:192.168.123.102:6811/1555906781]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-06T23:40:04.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:04 vm07 bash[20848]: audit 2026-03-06T22:40:04.040215+0000 mon.vm02 (mon.0) 501 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-06T23:40:04.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:04 vm07 bash[20848]: audit 2026-03-06T22:40:04.043455+0000 mon.vm07 (mon.1) 14 : audit [INF] from='osd.2 [v2:192.168.123.107:6808/3130883557,v1:192.168.123.107:6809/3130883557]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-06T23:40:04.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:04 vm07 bash[20848]: audit 2026-03-06T22:40:04.298330+0000 mon.vm02 (mon.0) 502 : audit [INF] from='osd.0 ' entity='osd.0'
2026-03-06T23:40:04.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:04 vm07 bash[20848]: audit 2026-03-06T22:40:04.358670+0000 mon.vm02 (mon.0) 503 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-06T23:40:04.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:04 vm02 bash[17013]: cluster 2026-03-06T22:40:01.875378+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-06T23:40:04.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:04 vm02 bash[17013]: cluster 2026-03-06T22:40:01.875500+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-06T23:40:04.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:04 vm02 bash[17013]: cluster 2026-03-06T22:40:02.350069+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-06T23:40:04.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:04 vm02 bash[17013]: cluster 2026-03-06T22:40:02.350160+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-06T23:40:04.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:04 vm02 bash[17013]: cluster 2026-03-06T22:40:02.752263+0000 mgr.vm02.opvwec (mgr.14199) 91 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-06T23:40:04.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:04 vm02 bash[17013]: audit 2026-03-06T22:40:03.636667+0000 mon.vm02 (mon.0) 500 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-06T23:40:04.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:04 vm02 bash[17013]: audit 2026-03-06T22:40:03.639911+0000 mon.vm07 (mon.1) 13 : audit [INF] from='osd.3 [v2:192.168.123.102:6810/1555906781,v1:192.168.123.102:6811/1555906781]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-06T23:40:04.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:04 vm02 bash[17013]: audit 2026-03-06T22:40:04.040215+0000 mon.vm02 (mon.0) 501 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-06T23:40:04.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:04 vm02 bash[17013]: audit 2026-03-06T22:40:04.043455+0000 mon.vm07 (mon.1) 14 : audit [INF] from='osd.2 [v2:192.168.123.107:6808/3130883557,v1:192.168.123.107:6809/3130883557]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-06T23:40:04.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:04 vm02 bash[17013]: audit 2026-03-06T22:40:04.298330+0000 mon.vm02 (mon.0) 502 : audit [INF] from='osd.0 ' entity='osd.0'
2026-03-06T23:40:04.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:04 vm02 bash[17013]: audit 2026-03-06T22:40:04.358670+0000 mon.vm02 (mon.0) 503 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-06T23:40:05.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:05 vm07 bash[20848]: audit 2026-03-06T22:40:04.404195+0000 mon.vm02 (mon.0) 504 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished
2026-03-06T23:40:05.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:05 vm07 bash[20848]: audit 2026-03-06T22:40:04.404326+0000 mon.vm02 (mon.0) 505 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-06T23:40:05.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:05 vm07 bash[20848]: cluster 2026-03-06T22:40:04.407116+0000 mon.vm02 (mon.0) 506 : cluster [INF] osd.0 [v2:192.168.123.107:6800/1132330073,v1:192.168.123.107:6801/1132330073] boot
2026-03-06T23:40:05.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:05 vm07 bash[20848]: cluster 2026-03-06T22:40:04.407267+0000 mon.vm02 (mon.0) 507 : cluster [DBG] osdmap e17: 8 total, 2 up, 8 in
2026-03-06T23:40:05.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:05 vm07 bash[20848]: audit 2026-03-06T22:40:04.407524+0000 mon.vm02 (mon.0) 508 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-06T23:40:05.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:05 vm07 bash[20848]: audit 2026-03-06T22:40:04.407593+0000 mon.vm02 (mon.0) 509 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-06T23:40:05.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:05 vm07 bash[20848]: audit 2026-03-06T22:40:04.407625+0000 mon.vm02 (mon.0) 510 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-06T23:40:05.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:05 vm07 bash[20848]: audit 2026-03-06T22:40:04.407750+0000 mon.vm02 (mon.0) 511 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-06T23:40:05.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:05 vm07 bash[20848]: audit 2026-03-06T22:40:04.407902+0000 mon.vm02 (mon.0) 512 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-06T23:40:05.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:05 vm07 bash[20848]: audit 2026-03-06T22:40:04.407965+0000 mon.vm02 (mon.0) 513 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-06T23:40:05.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:05 vm07 bash[20848]: audit 2026-03-06T22:40:04.408066+0000 mon.vm02 (mon.0) 514 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-06T23:40:05.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:05 vm07 bash[20848]: audit 2026-03-06T22:40:04.414640+0000 mon.vm02 (mon.0) 515 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch
"weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-06T23:40:05.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:05 vm07 bash[20848]: audit 2026-03-06T22:40:04.415012+0000 mon.vm02 (mon.0) 516 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-06T23:40:05.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:05 vm07 bash[20848]: audit 2026-03-06T22:40:04.415012+0000 mon.vm02 (mon.0) 516 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-06T23:40:05.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:05 vm07 bash[20848]: audit 2026-03-06T22:40:04.418035+0000 mon.vm07 (mon.1) 15 : audit [INF] from='osd.3 [v2:192.168.123.102:6810/1555906781,v1:192.168.123.102:6811/1555906781]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-06T23:40:05.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:05 vm07 bash[20848]: audit 2026-03-06T22:40:04.418035+0000 mon.vm07 (mon.1) 15 : audit [INF] from='osd.3 [v2:192.168.123.102:6810/1555906781,v1:192.168.123.102:6811/1555906781]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-06T23:40:05.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:05 vm07 bash[20848]: audit 2026-03-06T22:40:04.418475+0000 mon.vm07 (mon.1) 16 : audit [INF] from='osd.2 [v2:192.168.123.107:6808/3130883557,v1:192.168.123.107:6809/3130883557]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-06T23:40:05.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:05 vm07 bash[20848]: audit 2026-03-06T22:40:04.418475+0000 mon.vm07 (mon.1) 16 : audit [INF] from='osd.2 [v2:192.168.123.107:6808/3130883557,v1:192.168.123.107:6809/3130883557]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-06T23:40:05.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:05 vm07 bash[20848]: audit 2026-03-06T22:40:04.937837+0000 mon.vm02 (mon.0) 517 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-06T23:40:05.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:05 vm07 bash[20848]: audit 2026-03-06T22:40:04.937837+0000 mon.vm02 (mon.0) 517 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-06T23:40:05.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:05 vm07 bash[20848]: audit 2026-03-06T22:40:04.941174+0000 mon.vm07 (mon.1) 17 : audit [INF] from='osd.5 [v2:192.168.123.107:6816/1931549358,v1:192.168.123.107:6817/1931549358]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-06T23:40:05.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:05 vm07 bash[20848]: audit 2026-03-06T22:40:04.941174+0000 mon.vm07 (mon.1) 17 : audit [INF] from='osd.5 [v2:192.168.123.107:6816/1931549358,v1:192.168.123.107:6817/1931549358]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", 
"ids": ["5"]}]: dispatch 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: audit 2026-03-06T22:40:04.404195+0000 mon.vm02 (mon.0) 504 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: audit 2026-03-06T22:40:04.404195+0000 mon.vm02 (mon.0) 504 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: audit 2026-03-06T22:40:04.404326+0000 mon.vm02 (mon.0) 505 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: audit 2026-03-06T22:40:04.404326+0000 mon.vm02 (mon.0) 505 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: cluster 2026-03-06T22:40:04.407116+0000 mon.vm02 (mon.0) 506 : cluster [INF] osd.0 [v2:192.168.123.107:6800/1132330073,v1:192.168.123.107:6801/1132330073] boot 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: cluster 2026-03-06T22:40:04.407116+0000 mon.vm02 (mon.0) 506 : cluster [INF] osd.0 [v2:192.168.123.107:6800/1132330073,v1:192.168.123.107:6801/1132330073] boot 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: cluster 2026-03-06T22:40:04.407267+0000 mon.vm02 (mon.0) 507 : cluster [DBG] osdmap e17: 8 total, 2 up, 8 in 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: cluster 2026-03-06T22:40:04.407267+0000 mon.vm02 (mon.0) 507 : cluster [DBG] osdmap e17: 8 total, 2 up, 8 in 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: audit 2026-03-06T22:40:04.407524+0000 mon.vm02 (mon.0) 508 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: audit 2026-03-06T22:40:04.407524+0000 mon.vm02 (mon.0) 508 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: audit 2026-03-06T22:40:04.407593+0000 mon.vm02 (mon.0) 509 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: audit 2026-03-06T22:40:04.407593+0000 mon.vm02 (mon.0) 509 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: audit 2026-03-06T22:40:04.407625+0000 mon.vm02 (mon.0) 510 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 
cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: audit 2026-03-06T22:40:04.407625+0000 mon.vm02 (mon.0) 510 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: audit 2026-03-06T22:40:04.407750+0000 mon.vm02 (mon.0) 511 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: audit 2026-03-06T22:40:04.407750+0000 mon.vm02 (mon.0) 511 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: audit 2026-03-06T22:40:04.407902+0000 mon.vm02 (mon.0) 512 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: audit 2026-03-06T22:40:04.407902+0000 mon.vm02 (mon.0) 512 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: audit 2026-03-06T22:40:04.407965+0000 mon.vm02 (mon.0) 513 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: audit 2026-03-06T22:40:04.407965+0000 mon.vm02 (mon.0) 513 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: audit 2026-03-06T22:40:04.408066+0000 mon.vm02 (mon.0) 514 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: audit 2026-03-06T22:40:04.408066+0000 mon.vm02 (mon.0) 514 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: audit 2026-03-06T22:40:04.414640+0000 mon.vm02 (mon.0) 515 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: audit 2026-03-06T22:40:04.414640+0000 mon.vm02 (mon.0) 515 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: audit 2026-03-06T22:40:04.415012+0000 mon.vm02 (mon.0) 516 : audit [INF] from='osd.2 ' 
entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: audit 2026-03-06T22:40:04.415012+0000 mon.vm02 (mon.0) 516 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: audit 2026-03-06T22:40:04.418035+0000 mon.vm07 (mon.1) 15 : audit [INF] from='osd.3 [v2:192.168.123.102:6810/1555906781,v1:192.168.123.102:6811/1555906781]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: audit 2026-03-06T22:40:04.418035+0000 mon.vm07 (mon.1) 15 : audit [INF] from='osd.3 [v2:192.168.123.102:6810/1555906781,v1:192.168.123.102:6811/1555906781]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: audit 2026-03-06T22:40:04.418475+0000 mon.vm07 (mon.1) 16 : audit [INF] from='osd.2 [v2:192.168.123.107:6808/3130883557,v1:192.168.123.107:6809/3130883557]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: audit 2026-03-06T22:40:04.418475+0000 mon.vm07 (mon.1) 16 : audit [INF] from='osd.2 [v2:192.168.123.107:6808/3130883557,v1:192.168.123.107:6809/3130883557]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: audit 2026-03-06T22:40:04.937837+0000 mon.vm02 (mon.0) 517 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-06T23:40:05.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: audit 2026-03-06T22:40:04.937837+0000 mon.vm02 (mon.0) 517 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-06T23:40:05.744 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: audit 2026-03-06T22:40:04.941174+0000 mon.vm07 (mon.1) 17 : audit [INF] from='osd.5 [v2:192.168.123.107:6816/1931549358,v1:192.168.123.107:6817/1931549358]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-06T23:40:05.744 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:05 vm02 bash[17013]: audit 2026-03-06T22:40:04.941174+0000 mon.vm07 (mon.1) 17 : audit [INF] from='osd.5 [v2:192.168.123.107:6816/1931549358,v1:192.168.123.107:6817/1931549358]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-06T23:40:06.541 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config 2026-03-06T23:40:06.556 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 
vm02 bash[17013]: cluster 2026-03-06T22:40:04.752506+0000 mgr.vm02.opvwec (mgr.14199) 92 : cluster [DBG] pgmap v42: 0 pgs: ; 0 B data, 282 MiB used, 20 GiB / 20 GiB avail 2026-03-06T23:40:06.556 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: cluster 2026-03-06T22:40:04.752506+0000 mgr.vm02.opvwec (mgr.14199) 92 : cluster [DBG] pgmap v42: 0 pgs: ; 0 B data, 282 MiB used, 20 GiB / 20 GiB avail 2026-03-06T23:40:06.556 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:05.407309+0000 mon.vm02 (mon.0) 518 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-06T23:40:06.556 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:05.407309+0000 mon.vm02 (mon.0) 518 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-06T23:40:06.556 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:05.407456+0000 mon.vm02 (mon.0) 519 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-06T23:40:06.556 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:05.407456+0000 mon.vm02 (mon.0) 519 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-06T23:40:06.556 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:05.407604+0000 mon.vm02 (mon.0) 520 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:05.407604+0000 mon.vm02 (mon.0) 520 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: cluster 2026-03-06T22:40:05.414324+0000 mon.vm02 (mon.0) 521 : cluster [DBG] osdmap e18: 8 total, 2 up, 8 in 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: cluster 2026-03-06T22:40:05.414324+0000 mon.vm02 (mon.0) 521 : cluster [DBG] osdmap e18: 8 total, 2 up, 8 in 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:05.418496+0000 mon.vm07 (mon.1) 18 : audit [INF] from='osd.5 [v2:192.168.123.107:6816/1931549358,v1:192.168.123.107:6817/1931549358]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:05.418496+0000 mon.vm07 (mon.1) 18 : audit [INF] from='osd.5 [v2:192.168.123.107:6816/1931549358,v1:192.168.123.107:6817/1931549358]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-06T23:40:06.557 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:05.421194+0000 mon.vm02 (mon.0) 522 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:05.421194+0000 mon.vm02 (mon.0) 522 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:05.421347+0000 mon.vm02 (mon.0) 523 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:05.421347+0000 mon.vm02 (mon.0) 523 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:05.421484+0000 mon.vm02 (mon.0) 524 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:05.421484+0000 mon.vm02 (mon.0) 524 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:05.421563+0000 mon.vm02 (mon.0) 525 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:05.421563+0000 mon.vm02 (mon.0) 525 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:05.421617+0000 mon.vm02 (mon.0) 526 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:05.421617+0000 mon.vm02 (mon.0) 526 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:05.421669+0000 mon.vm02 (mon.0) 527 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:05.421669+0000 mon.vm02 (mon.0) 527 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd 
metadata", "id": 6}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:05.421719+0000 mon.vm02 (mon.0) 528 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:05.421719+0000 mon.vm02 (mon.0) 528 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:05.463802+0000 mon.vm02 (mon.0) 529 : audit [INF] from='osd.4 [v2:192.168.123.102:6818/2614111495,v1:192.168.123.102:6819/2614111495]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:05.463802+0000 mon.vm02 (mon.0) 529 : audit [INF] from='osd.4 [v2:192.168.123.102:6818/2614111495,v1:192.168.123.102:6819/2614111495]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:05.799606+0000 mon.vm02 (mon.0) 530 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:05.799606+0000 mon.vm02 (mon.0) 530 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:05.934944+0000 mon.vm02 (mon.0) 531 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:05.934944+0000 mon.vm02 (mon.0) 531 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:05.938295+0000 mon.vm07 (mon.1) 19 : audit [INF] from='osd.6 [v2:192.168.123.107:6824/2542357446,v1:192.168.123.107:6825/2542357446]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:05.938295+0000 mon.vm07 (mon.1) 19 : audit [INF] from='osd.6 [v2:192.168.123.107:6824/2542357446,v1:192.168.123.107:6825/2542357446]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.266214+0000 mon.vm02 (mon.0) 532 : audit [INF] from='osd.7 [v2:192.168.123.102:6826/322732066,v1:192.168.123.102:6827/322732066]' entity='osd.7' cmd=[{"prefix": "osd crush 
set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.266214+0000 mon.vm02 (mon.0) 532 : audit [INF] from='osd.7 [v2:192.168.123.102:6826/322732066,v1:192.168.123.102:6827/322732066]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.321542+0000 mon.vm02 (mon.0) 533 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.321542+0000 mon.vm02 (mon.0) 533 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.410299+0000 mon.vm02 (mon.0) 534 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.410299+0000 mon.vm02 (mon.0) 534 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.410422+0000 mon.vm02 (mon.0) 535 : audit [INF] from='osd.4 [v2:192.168.123.102:6818/2614111495,v1:192.168.123.102:6819/2614111495]' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.410422+0000 mon.vm02 (mon.0) 535 : audit [INF] from='osd.4 [v2:192.168.123.102:6818/2614111495,v1:192.168.123.102:6819/2614111495]' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.410538+0000 mon.vm02 (mon.0) 536 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.410538+0000 mon.vm02 (mon.0) 536 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.410727+0000 mon.vm02 (mon.0) 537 : audit [INF] from='osd.7 [v2:192.168.123.102:6826/322732066,v1:192.168.123.102:6827/322732066]' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.410727+0000 mon.vm02 (mon.0) 537 : audit [INF] from='osd.7 [v2:192.168.123.102:6826/322732066,v1:192.168.123.102:6827/322732066]' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-06T23:40:06.557 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: cluster 2026-03-06T22:40:06.413246+0000 mon.vm02 (mon.0) 538 : cluster [INF] osd.3 [v2:192.168.123.102:6810/1555906781,v1:192.168.123.102:6811/1555906781] boot 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: cluster 2026-03-06T22:40:06.413246+0000 mon.vm02 (mon.0) 538 : cluster [INF] osd.3 [v2:192.168.123.102:6810/1555906781,v1:192.168.123.102:6811/1555906781] boot 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: cluster 2026-03-06T22:40:06.413306+0000 mon.vm02 (mon.0) 539 : cluster [INF] osd.2 [v2:192.168.123.107:6808/3130883557,v1:192.168.123.107:6809/3130883557] boot 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: cluster 2026-03-06T22:40:06.413306+0000 mon.vm02 (mon.0) 539 : cluster [INF] osd.2 [v2:192.168.123.107:6808/3130883557,v1:192.168.123.107:6809/3130883557] boot 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: cluster 2026-03-06T22:40:06.413489+0000 mon.vm02 (mon.0) 540 : cluster [DBG] osdmap e19: 8 total, 4 up, 8 in 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: cluster 2026-03-06T22:40:06.413489+0000 mon.vm02 (mon.0) 540 : cluster [DBG] osdmap e19: 8 total, 4 up, 8 in 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.413723+0000 mon.vm02 (mon.0) 541 : audit [INF] from='osd.4 [v2:192.168.123.102:6818/2614111495,v1:192.168.123.102:6819/2614111495]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.413723+0000 mon.vm02 (mon.0) 541 : audit [INF] from='osd.4 [v2:192.168.123.102:6818/2614111495,v1:192.168.123.102:6819/2614111495]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.413849+0000 mon.vm02 (mon.0) 542 : audit [INF] from='osd.7 [v2:192.168.123.102:6826/322732066,v1:192.168.123.102:6827/322732066]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.413849+0000 mon.vm02 (mon.0) 542 : audit [INF] from='osd.7 [v2:192.168.123.102:6826/322732066,v1:192.168.123.102:6827/322732066]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.414058+0000 mon.vm02 (mon.0) 543 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.414058+0000 mon.vm02 (mon.0) 543 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 
cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.414215+0000 mon.vm02 (mon.0) 544 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.414215+0000 mon.vm02 (mon.0) 544 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.414534+0000 mon.vm02 (mon.0) 545 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-06T23:40:06.557 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.414534+0000 mon.vm02 (mon.0) 545 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-06T23:40:06.558 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.414713+0000 mon.vm02 (mon.0) 546 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-06T23:40:06.558 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.414713+0000 mon.vm02 (mon.0) 546 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-06T23:40:06.558 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.414748+0000 mon.vm02 (mon.0) 547 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-06T23:40:06.558 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.414748+0000 mon.vm02 (mon.0) 547 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-06T23:40:06.558 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.415023+0000 mon.vm02 (mon.0) 548 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-06T23:40:06.558 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.415023+0000 mon.vm02 (mon.0) 548 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-06T23:40:06.558 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.418847+0000 mon.vm02 (mon.0) 549 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-06T23:40:06.558 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.418847+0000 mon.vm02 (mon.0) 549 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": 
"osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-06T23:40:06.558 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.419357+0000 mon.vm02 (mon.0) 550 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-06T23:40:06.558 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.419357+0000 mon.vm02 (mon.0) 550 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-06T23:40:06.558 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.420328+0000 mon.vm07 (mon.1) 20 : audit [INF] from='osd.6 [v2:192.168.123.107:6824/2542357446,v1:192.168.123.107:6825/2542357446]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-06T23:40:06.558 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:06 vm02 bash[17013]: audit 2026-03-06T22:40:06.420328+0000 mon.vm07 (mon.1) 20 : audit [INF] from='osd.6 [v2:192.168.123.107:6824/2542357446,v1:192.168.123.107:6825/2542357446]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-06T23:40:06.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: cluster 2026-03-06T22:40:04.752506+0000 mgr.vm02.opvwec (mgr.14199) 92 : cluster [DBG] pgmap v42: 0 pgs: ; 0 B data, 282 MiB used, 20 GiB / 20 GiB avail 2026-03-06T23:40:06.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: cluster 2026-03-06T22:40:04.752506+0000 mgr.vm02.opvwec (mgr.14199) 92 : cluster [DBG] pgmap v42: 0 pgs: ; 0 B data, 282 MiB used, 20 GiB / 20 GiB avail 2026-03-06T23:40:06.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:05.407309+0000 mon.vm02 (mon.0) 518 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-06T23:40:06.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:05.407309+0000 mon.vm02 (mon.0) 518 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-06T23:40:06.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:05.407456+0000 mon.vm02 (mon.0) 519 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-06T23:40:06.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:05.407456+0000 mon.vm02 (mon.0) 519 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-06T23:40:06.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:05.407604+0000 mon.vm02 (mon.0) 520 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": 
["5"]}]': finished 2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:05.407604+0000 mon.vm02 (mon.0) 520 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: cluster 2026-03-06T22:40:05.414324+0000 mon.vm02 (mon.0) 521 : cluster [DBG] osdmap e18: 8 total, 2 up, 8 in 2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: cluster 2026-03-06T22:40:05.414324+0000 mon.vm02 (mon.0) 521 : cluster [DBG] osdmap e18: 8 total, 2 up, 8 in 2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:05.418496+0000 mon.vm07 (mon.1) 18 : audit [INF] from='osd.5 [v2:192.168.123.107:6816/1931549358,v1:192.168.123.107:6817/1931549358]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:05.418496+0000 mon.vm07 (mon.1) 18 : audit [INF] from='osd.5 [v2:192.168.123.107:6816/1931549358,v1:192.168.123.107:6817/1931549358]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:05.421194+0000 mon.vm02 (mon.0) 522 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:05.421194+0000 mon.vm02 (mon.0) 522 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:05.421347+0000 mon.vm02 (mon.0) 523 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:05.421347+0000 mon.vm02 (mon.0) 523 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:05.421484+0000 mon.vm02 (mon.0) 524 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:05.421484+0000 mon.vm02 (mon.0) 524 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:05.421563+0000 mon.vm02 (mon.0) 525 : audit [DBG] from='mgr.14199 
192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:05.421563+0000 mon.vm02 (mon.0) 525 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:05.421617+0000 mon.vm02 (mon.0) 526 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:05.421617+0000 mon.vm02 (mon.0) 526 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:05.421669+0000 mon.vm02 (mon.0) 527 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:05.421669+0000 mon.vm02 (mon.0) 527 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:05.421719+0000 mon.vm02 (mon.0) 528 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:05.421719+0000 mon.vm02 (mon.0) 528 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:05.463802+0000 mon.vm02 (mon.0) 529 : audit [INF] from='osd.4 [v2:192.168.123.102:6818/2614111495,v1:192.168.123.102:6819/2614111495]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:05.463802+0000 mon.vm02 (mon.0) 529 : audit [INF] from='osd.4 [v2:192.168.123.102:6818/2614111495,v1:192.168.123.102:6819/2614111495]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:05.799606+0000 mon.vm02 (mon.0) 530 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:05.799606+0000 mon.vm02 (mon.0) 530 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:40:06.729 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:05.934944+0000 mon.vm02 (mon.0) 531 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:05.938295+0000 mon.vm07 (mon.1) 19 : audit [INF] from='osd.6 [v2:192.168.123.107:6824/2542357446,v1:192.168.123.107:6825/2542357446]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:06.266214+0000 mon.vm02 (mon.0) 532 : audit [INF] from='osd.7 [v2:192.168.123.102:6826/322732066,v1:192.168.123.102:6827/322732066]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:06.321542+0000 mon.vm02 (mon.0) 533 : audit [INF] from='osd.2 ' entity='osd.2'
2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:06.410299+0000 mon.vm02 (mon.0) 534 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:06.410422+0000 mon.vm02 (mon.0) 535 : audit [INF] from='osd.4 [v2:192.168.123.102:6818/2614111495,v1:192.168.123.102:6819/2614111495]' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished
2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:06.410538+0000 mon.vm02 (mon.0) 536 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished
2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:06.410727+0000 mon.vm02 (mon.0) 537 : audit [INF] from='osd.7 [v2:192.168.123.102:6826/322732066,v1:192.168.123.102:6827/322732066]' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished
2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: cluster 2026-03-06T22:40:06.413246+0000 mon.vm02 (mon.0) 538 : cluster [INF] osd.3 [v2:192.168.123.102:6810/1555906781,v1:192.168.123.102:6811/1555906781] boot
2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: cluster 2026-03-06T22:40:06.413306+0000 mon.vm02 (mon.0) 539 : cluster [INF] osd.2 [v2:192.168.123.107:6808/3130883557,v1:192.168.123.107:6809/3130883557] boot
2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: cluster 2026-03-06T22:40:06.413489+0000 mon.vm02 (mon.0) 540 : cluster [DBG] osdmap e19: 8 total, 4 up, 8 in
2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:06.413723+0000 mon.vm02 (mon.0) 541 : audit [INF] from='osd.4 [v2:192.168.123.102:6818/2614111495,v1:192.168.123.102:6819/2614111495]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch
2026-03-06T23:40:06.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:06.413849+0000 mon.vm02 (mon.0) 542 : audit [INF] from='osd.7 [v2:192.168.123.102:6826/322732066,v1:192.168.123.102:6827/322732066]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch
2026-03-06T23:40:06.730 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:06.414058+0000 mon.vm02 (mon.0) 543 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-06T23:40:06.730 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:06.414215+0000 mon.vm02 (mon.0) 544 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-06T23:40:06.730 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:06.414534+0000 mon.vm02 (mon.0) 545 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-06T23:40:06.730 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:06.414713+0000 mon.vm02 (mon.0) 546 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-06T23:40:06.730 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:06.414748+0000 mon.vm02 (mon.0) 547 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-06T23:40:06.730 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:06.415023+0000 mon.vm02 (mon.0) 548 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-06T23:40:06.730 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:06.418847+0000 mon.vm02 (mon.0) 549 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-06T23:40:06.730 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:06.419357+0000 mon.vm02 (mon.0) 550 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-06T23:40:06.730 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:06 vm07 bash[20848]: audit 2026-03-06T22:40:06.420328+0000 mon.vm07 (mon.1) 20 : audit [INF] from='osd.6 [v2:192.168.123.107:6824/2542357446,v1:192.168.123.107:6825/2542357446]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-06T23:40:06.896 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-06T23:40:06.978 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":19,"num_osds":8,"num_up_osds":4,"osd_up_since":1772836806,"num_in_osds":8,"osd_in_since":1772836784,"num_remapped_pgs":0}
2026-03-06T23:40:07.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:07 vm07 bash[20848]: cluster 2026-03-06T22:40:04.603338+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-06T23:40:07.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:07 vm07 bash[20848]: cluster 2026-03-06T22:40:04.603390+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-06T23:40:07.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:07 vm07 bash[20848]: cluster 2026-03-06T22:40:04.990450+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-06T23:40:07.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:07 vm07 bash[20848]: cluster 2026-03-06T22:40:04.990497+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-06T23:40:07.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:07 vm07 bash[20848]: audit 2026-03-06T22:40:06.794814+0000 mon.vm02 (mon.0) 551 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-06T23:40:07.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:07 vm07 bash[20848]: audit 2026-03-06T22:40:06.891081+0000 mon.vm02 (mon.0) 552 : audit [DBG] from='client.? 192.168.123.102:0/777973159' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-06T23:40:07.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:07 vm07 bash[20848]: audit 2026-03-06T22:40:07.413895+0000 mon.vm02 (mon.0) 553 : audit [INF] from='osd.4 [v2:192.168.123.102:6818/2614111495,v1:192.168.123.102:6819/2614111495]' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished
2026-03-06T23:40:07.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:07 vm07 bash[20848]: audit 2026-03-06T22:40:07.414001+0000 mon.vm02 (mon.0) 554 : audit [INF] from='osd.7 [v2:192.168.123.102:6826/322732066,v1:192.168.123.102:6827/322732066]' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished
2026-03-06T23:40:07.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:07 vm07 bash[20848]: audit 2026-03-06T22:40:07.414038+0000 mon.vm02 (mon.0) 555 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-06T23:40:07.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:07 vm07 bash[20848]: audit 2026-03-06T22:40:07.414104+0000 mon.vm02 (mon.0) 556 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
2026-03-06T23:40:07.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:07 vm07 bash[20848]: cluster 2026-03-06T22:40:07.429810+0000 mon.vm02 (mon.0) 557 : cluster [INF] osd.5 [v2:192.168.123.107:6816/1931549358,v1:192.168.123.107:6817/1931549358] boot
2026-03-06T23:40:07.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:07 vm07 bash[20848]: cluster 2026-03-06T22:40:07.429880+0000 mon.vm02 (mon.0) 558 : cluster [DBG] osdmap e20: 8 total, 5 up, 8 in
2026-03-06T23:40:07.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:07 vm02 bash[17013]: cluster 2026-03-06T22:40:04.603338+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-06T23:40:07.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:07 vm02 bash[17013]: cluster 2026-03-06T22:40:04.603390+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-06T23:40:07.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:07 vm02 bash[17013]: cluster 2026-03-06T22:40:04.990450+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-06T23:40:07.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:07 vm02 bash[17013]: cluster 2026-03-06T22:40:04.990497+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-06T23:40:07.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:07 vm02 bash[17013]: audit 2026-03-06T22:40:06.794814+0000 mon.vm02 (mon.0) 551 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-06T23:40:07.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:07 vm02 bash[17013]: audit 2026-03-06T22:40:06.891081+0000 mon.vm02 (mon.0) 552 : audit [DBG] from='client.? 192.168.123.102:0/777973159' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-06T23:40:07.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:07 vm02 bash[17013]: audit 2026-03-06T22:40:07.413895+0000 mon.vm02 (mon.0) 553 : audit [INF] from='osd.4 [v2:192.168.123.102:6818/2614111495,v1:192.168.123.102:6819/2614111495]' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished
2026-03-06T23:40:07.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:07 vm02 bash[17013]: audit 2026-03-06T22:40:07.414001+0000 mon.vm02 (mon.0) 554 : audit [INF] from='osd.7 [v2:192.168.123.102:6826/322732066,v1:192.168.123.102:6827/322732066]' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished
2026-03-06T23:40:07.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:07 vm02 bash[17013]: audit 2026-03-06T22:40:07.414038+0000 mon.vm02 (mon.0) 555 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-06T23:40:07.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:07 vm02 bash[17013]: audit 2026-03-06T22:40:07.414104+0000 mon.vm02 (mon.0) 556 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
2026-03-06T23:40:07.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:07 vm02 bash[17013]: cluster 2026-03-06T22:40:07.429810+0000 mon.vm02 (mon.0) 557 : cluster [INF] osd.5 [v2:192.168.123.107:6816/1931549358,v1:192.168.123.107:6817/1931549358] boot
2026-03-06T23:40:07.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:07 vm02 bash[17013]: cluster 2026-03-06T22:40:07.429880+0000 mon.vm02 (mon.0) 558 : cluster [DBG] osdmap e20: 8 total, 5 up, 8 in
2026-03-06T23:40:07.980 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph osd stat -f json
2026-03-06T23:40:08.649 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:08 vm07 bash[20848]: cluster 2026-03-06T22:40:05.905059+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-06T23:40:08.649 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:08 vm07 bash[20848]: cluster 2026-03-06T22:40:05.905098+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-06T23:40:08.649 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:08 vm07 bash[20848]: cluster 2026-03-06T22:40:06.423007+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-06T23:40:08.649 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:08 vm07 bash[20848]: cluster 2026-03-06T22:40:06.423057+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-06T23:40:08.649 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:08 vm07 bash[20848]: cluster 2026-03-06T22:40:06.752720+0000 mgr.vm02.opvwec (mgr.14199) 93 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 685 MiB used, 79 GiB / 80 GiB avail
2026-03-06T23:40:08.649 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:08 vm07 bash[20848]: audit 2026-03-06T22:40:07.431293+0000 mon.vm02 (mon.0) 559 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-06T23:40:08.649 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:08 vm07 bash[20848]: audit 2026-03-06T22:40:07.447052+0000 mon.vm02 (mon.0) 560 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-06T23:40:08.649 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:08 vm07 bash[20848]: audit 2026-03-06T22:40:07.447147+0000 mon.vm02 (mon.0) 561 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-06T23:40:08.649 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:08 vm07 bash[20848]: audit 2026-03-06T22:40:07.447200+0000 mon.vm02 (mon.0) 562 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-06T23:40:08.649 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:08 vm07 bash[20848]: audit 2026-03-06T22:40:07.447276+0000 mon.vm02 (mon.0) 563 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
2026-03-06T23:40:08.649 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:08 vm07 bash[20848]: audit 2026-03-06T22:40:08.421425+0000 mon.vm02 (mon.0) 564 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-06T23:40:08.649 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:08 vm07 bash[20848]: audit 2026-03-06T22:40:08.434177+0000 mon.vm02 (mon.0) 565 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-06T23:40:08.649 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:08 vm07 bash[20848]: audit 2026-03-06T22:40:08.434816+0000 mon.vm02 (mon.0) 566 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-06T23:40:08.649 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:08 vm07 bash[20848]: audit 2026-03-06T22:40:08.437961+0000 mon.vm02 (mon.0) 567 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
2026-03-06T23:40:08.649 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:08 vm07 bash[20848]: cluster 2026-03-06T22:40:08.445229+0000 mon.vm02 (mon.0) 568 : cluster [INF] osd.6 [v2:192.168.123.107:6824/2542357446,v1:192.168.123.107:6825/2542357446] boot
2026-03-06T23:40:08.649 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:08 vm07 bash[20848]: cluster 2026-03-06T22:40:08.445302+0000 mon.vm02 (mon.0) 569 : cluster [DBG] osdmap e21: 8 total, 6 up, 8 in
2026-03-06T23:40:08.649 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:08 vm07 bash[20848]: audit 2026-03-06T22:40:08.445812+0000 mon.vm02 (mon.0) 570 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-06T23:40:08.649 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:08 vm07 bash[20848]: audit 2026-03-06T22:40:08.445867+0000 mon.vm02 (mon.0) 571 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-06T23:40:08.649 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:08 vm07 bash[20848]: audit 2026-03-06T22:40:08.445931+0000 mon.vm02 (mon.0) 572 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-06T23:40:08.761 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:08 vm02 bash[17013]: cluster 2026-03-06T22:40:05.905059+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-06T23:40:08.762 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:08 vm02 bash[17013]: cluster 2026-03-06T22:40:05.905098+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-06T23:40:08.762 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:08 vm02 bash[17013]: cluster 2026-03-06T22:40:06.423007+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-06T23:40:08.762 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:08 vm02 bash[17013]: cluster 2026-03-06T22:40:06.423057+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-06T23:40:08.762 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:08 vm02 bash[17013]: cluster 2026-03-06T22:40:06.752720+0000 mgr.vm02.opvwec (mgr.14199) 93 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 685 MiB used, 79 GiB / 80 GiB avail
2026-03-06T23:40:08.762 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:08 vm02 bash[17013]: audit 2026-03-06T22:40:07.431293+0000 mon.vm02 (mon.0) 559 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-06T23:40:08.762 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:08 vm02 bash[17013]: audit 2026-03-06T22:40:07.447052+0000 mon.vm02 (mon.0) 560 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-06T23:40:08.762 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:08 vm02 bash[17013]: audit 2026-03-06T22:40:07.447147+0000 mon.vm02 (mon.0) 561 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-06T23:40:08.762 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:08 vm02 bash[17013]: audit 2026-03-06T22:40:07.447200+0000 mon.vm02 (mon.0) 562 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-06T23:40:08.762 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:08 vm02 bash[17013]: audit 2026-03-06T22:40:07.447276+0000 mon.vm02 (mon.0) 563 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
2026-03-06T23:40:08.762 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:08 vm02 bash[17013]: audit 2026-03-06T22:40:08.421425+0000 mon.vm02 (mon.0) 564 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-06T23:40:08.762 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:08 vm02 bash[17013]: audit 2026-03-06T22:40:08.434177+0000 mon.vm02 (mon.0) 565 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-06T23:40:08.762 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:08 vm02 bash[17013]: audit 2026-03-06T22:40:08.434816+0000 mon.vm02 (mon.0) 566 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-06T23:40:08.762 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:08 vm02 bash[17013]: audit 2026-03-06T22:40:08.437961+0000 mon.vm02 (mon.0) 567 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
2026-03-06T23:40:08.762 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:08 vm02 bash[17013]: cluster 2026-03-06T22:40:08.445229+0000 mon.vm02 (mon.0) 568 : cluster [INF] osd.6 [v2:192.168.123.107:6824/2542357446,v1:192.168.123.107:6825/2542357446] boot
2026-03-06T23:40:08.762 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:08 vm02 bash[17013]: cluster 2026-03-06T22:40:08.445302+0000 mon.vm02 (mon.0) 569 : cluster [DBG] osdmap e21: 8 total, 6 up, 8 in
2026-03-06T23:40:08.762 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:08 vm02 bash[17013]: audit 2026-03-06T22:40:08.445812+0000 mon.vm02 (mon.0) 570 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-06T23:40:08.762 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:08 vm02 bash[17013]: audit 2026-03-06T22:40:08.445867+0000 mon.vm02 (mon.0) 571 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-06T23:40:08.762 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:08 vm02 bash[17013]: audit 2026-03-06T22:40:08.445931+0000 mon.vm02 (mon.0) 572 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-06T23:40:09.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:09 vm07 bash[20848]: cluster 2026-03-06T22:40:06.921141+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-06T23:40:09.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:09 vm07 bash[20848]: cluster 2026-03-06T22:40:06.921198+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-06T23:40:09.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:09 vm07 bash[20848]: cluster 2026-03-06T22:40:07.223606+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-06T23:40:09.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:09 vm07 bash[20848]: cluster 2026-03-06T22:40:07.223660+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-06T23:40:09.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:09 vm07 bash[20848]: audit 2026-03-06T22:40:09.005661+0000 mon.vm02 (mon.0) 573 : audit [INF] from='osd.7 [v2:192.168.123.102:6826/322732066,v1:192.168.123.102:6827/322732066]' entity='osd.7'
2026-03-06T23:40:09.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:09 vm07 bash[20848]: audit 2026-03-06T22:40:09.023450+0000 mon.vm02 (mon.0) 574 : audit [INF] from='osd.4 [v2:192.168.123.102:6818/2614111495,v1:192.168.123.102:6819/2614111495]' entity='osd.4'
2026-03-06T23:40:09.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:09 vm07 bash[20848]: audit 2026-03-06T22:40:09.434959+0000 mon.vm02 (mon.0) 575 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-06T23:40:09.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:09 vm07 bash[20848]: audit 2026-03-06T22:40:09.435311+0000 mon.vm02 (mon.0) 576 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-06T23:40:09.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:09 vm07 bash[20848]: cluster 2026-03-06T22:40:09.447065+0000 mon.vm02 (mon.0) 577 : cluster [INF] osd.7 [v2:192.168.123.102:6826/322732066,v1:192.168.123.102:6827/322732066] boot
2026-03-06T23:40:09.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:09 vm07 bash[20848]: cluster 2026-03-06T22:40:09.447228+0000 mon.vm02 (mon.0) 578 : cluster [INF] osd.4 [v2:192.168.123.102:6818/2614111495,v1:192.168.123.102:6819/2614111495] boot
2026-03-06T23:40:09.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:09 vm07 bash[20848]: cluster 2026-03-06T22:40:09.447529+0000 mon.vm02 (mon.0) 579 : cluster [DBG] osdmap e22: 8 total, 8 up, 8 in
2026-03-06T23:40:09.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:09 vm07 bash[20848]: audit 2026-03-06T22:40:09.447943+0000 mon.vm02 (mon.0) 580 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-06T23:40:09.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:09 vm07 bash[20848]: audit 2026-03-06T22:40:09.448137+0000 mon.vm02 (mon.0) 581 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-06T23:40:09.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:09 vm02 bash[17013]: cluster 2026-03-06T22:40:06.921141+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-06T23:40:09.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:09 vm02 bash[17013]: cluster 2026-03-06T22:40:06.921198+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-06T23:40:09.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:09 vm02 bash[17013]: cluster 2026-03-06T22:40:07.223606+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-06T23:40:09.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:09 vm02 bash[17013]: cluster 2026-03-06T22:40:07.223660+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-06T23:40:09.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:09 vm02 bash[17013]: audit 2026-03-06T22:40:09.005661+0000 mon.vm02 (mon.0) 573 : audit [INF] from='osd.7 [v2:192.168.123.102:6826/322732066,v1:192.168.123.102:6827/322732066]' entity='osd.7'
2026-03-06T23:40:09.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:09 vm02 bash[17013]: audit 2026-03-06T22:40:09.023450+0000 mon.vm02 (mon.0) 574 : audit [INF] from='osd.4 [v2:192.168.123.102:6818/2614111495,v1:192.168.123.102:6819/2614111495]' entity='osd.4'
2026-03-06T23:40:09.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:09 vm02 bash[17013]: audit 2026-03-06T22:40:09.434959+0000 mon.vm02 (mon.0) 575 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-06T23:40:09.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:09 vm02 bash[17013]: audit 2026-03-06T22:40:09.435311+0000 mon.vm02 (mon.0) 576 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-06T23:40:09.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:09 vm02 bash[17013]: cluster 2026-03-06T22:40:09.447065+0000 mon.vm02 (mon.0) 577 : cluster [INF] osd.7 [v2:192.168.123.102:6826/322732066,v1:192.168.123.102:6827/322732066] boot
2026-03-06T23:40:09.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:09 vm02 bash[17013]: cluster 2026-03-06T22:40:09.447228+0000 mon.vm02 (mon.0) 578 : cluster [INF] osd.4 [v2:192.168.123.102:6818/2614111495,v1:192.168.123.102:6819/2614111495] boot
2026-03-06T23:40:09.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:09 vm02 bash[17013]: cluster 2026-03-06T22:40:09.447529+0000 mon.vm02 (mon.0) 579 : cluster [DBG] osdmap e22: 8 total, 8 up, 8 in
2026-03-06T23:40:09.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:09 vm02 bash[17013]: audit 2026-03-06T22:40:09.447943+0000 mon.vm02 (mon.0) 580 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-06T23:40:09.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:09 vm02 bash[17013]: audit 2026-03-06T22:40:09.448137+0000 mon.vm02 (mon.0) 581 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-06T23:40:10.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:10 vm07 bash[20848]: cluster 2026-03-06T22:40:08.752957+0000 mgr.vm02.opvwec (mgr.14199) 94 : cluster [DBG] pgmap v48: 1 pgs: 1 unknown; 0 B data, 856 MiB used, 99 GiB / 100 GiB avail
2026-03-06T23:40:10.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:10 vm07 bash[20848]: audit 2026-03-06T22:40:10.292077+0000 mon.vm02 (mon.0) 582 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:10.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:10 vm07 bash[20848]: audit 2026-03-06T22:40:10.296902+0000 mon.vm02 (mon.0) 583 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:10.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:10 vm07 bash[20848]: cluster 2026-03-06T22:40:10.446419+0000 mon.vm02 (mon.0) 584 : cluster [DBG] osdmap e23: 8 total, 8 up, 8 in
2026-03-06T23:40:10.992 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:10 vm02 bash[17013]: cluster 2026-03-06T22:40:08.752957+0000 mgr.vm02.opvwec (mgr.14199) 94 : cluster [DBG] pgmap v48: 1 pgs: 1 unknown; 0 B data, 856 MiB used, 99 GiB / 100 GiB avail
2026-03-06T23:40:10.992 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:10 vm02 bash[17013]: audit 2026-03-06T22:40:10.292077+0000 mon.vm02 (mon.0) 582 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:10.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:10 vm02 bash[17013]: audit 2026-03-06T22:40:10.296902+0000 mon.vm02 (mon.0) 583 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:10.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:10 vm02 bash[17013]: cluster 2026-03-06T22:40:10.446419+0000 mon.vm02 (mon.0) 584 : cluster [DBG] osdmap e23: 8 total, 8 up, 8 in
2026-03-06T23:40:11.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:11 vm07 bash[20848]: audit 2026-03-06T22:40:10.652340+0000 mon.vm02 (mon.0) 585 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:11.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:11 vm07 bash[20848]: audit 2026-03-06T22:40:10.657101+0000 mon.vm02 (mon.0) 586 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:11.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:11 vm07 bash[20848]: audit 2026-03-06T22:40:10.698465+0000 mon.vm02 (mon.0) 587 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-06T23:40:11.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:11 vm07 bash[20848]: audit 2026-03-06T22:40:11.247189+0000 mon.vm02 (mon.0) 588 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-06T23:40:11.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:11 vm07 bash[20848]: audit 2026-03-06T22:40:11.264843+0000 mon.vm02 (mon.0) 589 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-06T23:40:11.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:11 vm07 bash[20848]: audit 2026-03-06T22:40:11.265083+0000 mon.vm02 (mon.0) 590 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mon metadata", "id": "vm02"}]: dispatch
2026-03-06T23:40:11.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:11 vm07 bash[20848]: audit 2026-03-06T22:40:11.265235+0000 mon.vm02 (mon.0) 591 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch
2026-03-06T23:40:11.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:11 vm07 bash[20848]: audit 2026-03-06T22:40:11.267054+0000 mon.vm02 (mon.0) 592 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mon metadata", "id": "vm02"}]: dispatch
2026-03-06T23:40:11.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:11 vm07 bash[20848]: audit 2026-03-06T22:40:11.267101+0000 mon.vm02 (mon.0) 593 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch
2026-03-06T23:40:11.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:11 vm07 bash[20848]: audit 2026-03-06T22:40:11.270501+0000 mon.vm07 (mon.1) 21 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-06T23:40:11.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:11 vm07 bash[20848]: audit 2026-03-06T22:40:11.287702+0000 mon.vm07 (mon.1) 22 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-06T23:40:11.979 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:11 vm07 bash[20848]: cluster 2026-03-06T22:40:11.451371+0000 mon.vm02 (mon.0) 594 : cluster [DBG] osdmap e24: 8 total, 8 up, 8 in
2026-03-06T23:40:11.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:11 vm02 bash[17013]: audit 2026-03-06T22:40:10.652340+0000 mon.vm02 (mon.0) 585 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:11.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:11 vm02 bash[17013]: audit 2026-03-06T22:40:10.657101+0000 mon.vm02 (mon.0) 586 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:11.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:11 vm02 bash[17013]: audit 2026-03-06T22:40:10.698465+0000 mon.vm02 (mon.0)
587 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-06T23:40:11.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:11 vm02 bash[17013]: audit 2026-03-06T22:40:10.698465+0000 mon.vm02 (mon.0) 587 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-06T23:40:11.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:11 vm02 bash[17013]: audit 2026-03-06T22:40:11.247189+0000 mon.vm02 (mon.0) 588 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-06T23:40:11.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:11 vm02 bash[17013]: audit 2026-03-06T22:40:11.247189+0000 mon.vm02 (mon.0) 588 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-06T23:40:11.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:11 vm02 bash[17013]: audit 2026-03-06T22:40:11.264843+0000 mon.vm02 (mon.0) 589 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-06T23:40:11.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:11 vm02 bash[17013]: audit 2026-03-06T22:40:11.264843+0000 mon.vm02 (mon.0) 589 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-06T23:40:11.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:11 vm02 bash[17013]: audit 2026-03-06T22:40:11.265083+0000 mon.vm02 (mon.0) 590 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mon metadata", "id": "vm02"}]: dispatch 2026-03-06T23:40:11.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:11 vm02 bash[17013]: audit 2026-03-06T22:40:11.265083+0000 mon.vm02 (mon.0) 590 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mon metadata", "id": "vm02"}]: dispatch 2026-03-06T23:40:11.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:11 vm02 bash[17013]: audit 2026-03-06T22:40:11.265235+0000 mon.vm02 (mon.0) 591 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch 2026-03-06T23:40:11.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:11 vm02 bash[17013]: audit 2026-03-06T22:40:11.265235+0000 mon.vm02 (mon.0) 591 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch 2026-03-06T23:40:11.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:11 vm02 bash[17013]: audit 2026-03-06T22:40:11.267054+0000 mon.vm02 (mon.0) 592 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mon metadata", "id": "vm02"}]: dispatch 2026-03-06T23:40:11.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:11 vm02 bash[17013]: audit 2026-03-06T22:40:11.267054+0000 mon.vm02 (mon.0) 592 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mon metadata", "id": "vm02"}]: dispatch 2026-03-06T23:40:11.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:11 vm02 bash[17013]: audit 2026-03-06T22:40:11.267101+0000 mon.vm02 (mon.0) 593 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "mon metadata", "id": "vm07"}]: dispatch 
2026-03-06T23:40:11.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:11 vm02 bash[17013]: audit 2026-03-06T22:40:11.270501+0000 mon.vm07 (mon.1) 21 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-06T23:40:11.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:11 vm02 bash[17013]: audit 2026-03-06T22:40:11.287702+0000 mon.vm07 (mon.1) 22 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-06T23:40:11.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:11 vm02 bash[17013]: cluster 2026-03-06T22:40:11.451371+0000 mon.vm02 (mon.0) 594 : cluster [DBG] osdmap e24: 8 total, 8 up, 8 in
2026-03-06T23:40:12.591 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:40:12.953 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-06T23:40:12.964 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:12 vm02 bash[17013]: cluster 2026-03-06T22:40:10.753178+0000 mgr.vm02.opvwec (mgr.14199) 95 : cluster [DBG] pgmap v51: 1 pgs: 1 creating+peering; 0 B data, 1.7 GiB used, 158 GiB / 160 GiB avail
2026-03-06T23:40:12.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:12 vm07 bash[20848]: cluster 2026-03-06T22:40:10.753178+0000 mgr.vm02.opvwec (mgr.14199) 95 : cluster [DBG] pgmap v51: 1 pgs: 1 creating+peering; 0 B data, 1.7 GiB used, 158 GiB / 160 GiB avail
2026-03-06T23:40:13.016 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":24,"num_osds":8,"num_up_osds":8,"osd_up_since":1772836809,"num_in_osds":8,"osd_in_since":1772836784,"num_remapped_pgs":0}
2026-03-06T23:40:13.016 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph osd dump --format=json
2026-03-06T23:40:13.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:13 vm07 bash[20848]: audit 2026-03-06T22:40:12.948506+0000 mon.vm02 (mon.0) 595 : audit [DBG] from='client.? 192.168.123.102:0/1518865227' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-06T23:40:13.992 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:13 vm02 bash[17013]: audit 2026-03-06T22:40:12.948506+0000 mon.vm02 (mon.0) 595 : audit [DBG] from='client.? 192.168.123.102:0/1518865227' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-06T23:40:14.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:14 vm07 bash[20848]: cluster 2026-03-06T22:40:12.753447+0000 mgr.vm02.opvwec (mgr.14199) 96 : cluster [DBG] pgmap v53: 1 pgs: 1 creating+peering; 0 B data, 1.4 GiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:14.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:14 vm07 bash[20848]: cluster 2026-03-06T22:40:13.689401+0000 mon.vm02 (mon.0) 596 : cluster [DBG] mgrmap e19: vm02.opvwec(active, since 82s), standbys: vm07.jbleen
2026-03-06T23:40:14.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:14 vm02 bash[17013]: cluster 2026-03-06T22:40:12.753447+0000 mgr.vm02.opvwec (mgr.14199) 96 : cluster [DBG] pgmap v53: 1 pgs: 1 creating+peering; 0 B data, 1.4 GiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:14.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:14 vm02 bash[17013]: cluster 2026-03-06T22:40:13.689401+0000 mon.vm02 (mon.0) 596 : cluster [DBG] mgrmap e19: vm02.opvwec(active, since 82s), standbys: vm07.jbleen
2026-03-06T23:40:16.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:16 vm07 bash[20848]: cluster 2026-03-06T22:40:14.753807+0000 mgr.vm02.opvwec (mgr.14199) 97 : cluster [DBG] pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 1012 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:16.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:16 vm07 bash[20848]: audit 2026-03-06T22:40:15.269223+0000 mon.vm02 (mon.0) 597 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:16.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:16 vm07 bash[20848]: audit 2026-03-06T22:40:15.273564+0000 mon.vm02 (mon.0) 598 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:16.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:16 vm07 bash[20848]: audit 2026-03-06T22:40:15.772772+0000 mon.vm02 (mon.0) 599 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:16.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:16 vm07 bash[20848]: audit 2026-03-06T22:40:15.777024+0000 mon.vm02 (mon.0) 600 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:16.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:16 vm02 bash[17013]: cluster 2026-03-06T22:40:14.753807+0000 mgr.vm02.opvwec (mgr.14199) 97 : cluster [DBG] pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 1012 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:16.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:16 vm02 bash[17013]: audit 2026-03-06T22:40:15.269223+0000 mon.vm02 (mon.0) 597 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:16.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:16 vm02 bash[17013]: audit 2026-03-06T22:40:15.273564+0000 mon.vm02 (mon.0) 598 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:16.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:16 vm02 bash[17013]: audit 2026-03-06T22:40:15.772772+0000 mon.vm02 (mon.0) 599 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:16.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:16 vm02 bash[17013]: audit 2026-03-06T22:40:15.777024+0000 mon.vm02 (mon.0) 600 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:17.634 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:40:17.979 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-06T23:40:17.979 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":24,"fsid":"f8b8c16a-19ac-11f1-87e7-9b7402b99c44","created":"2026-03-06T22:37:19.213016+0000","modified":"2026-03-06T22:40:11.444437+0000","last_up_change":"2026-03-06T22:40:09.437893+0000","last_in_change":"2026-03-06T22:39:44.682915+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":9,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-06T22:40:06.797344+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"24","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects"
:0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"418e828c-709f-40ee-9849-890589b82337","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":17,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6800","nonce":1132330073},{"type":"v1","addr":"192.168.123.107:6801","nonce":1132330073}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6802","nonce":1132330073},{"type":"v1","addr":"192.168.123.107:6803","nonce":1132330073}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6806","nonce":1132330073},{"type":"v1","addr":"192.168.123.107:6807","nonce":1132330073}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6804","nonce":1132330073},{"type":"v1","addr":"192.168.123.107:6805","nonce":1132330073}]},"public_addr":"192.168.123.107:6801/1132330073","cluster_addr":"192.168.123.107:6803/1132330073","heartbeat_back_addr":"192.168.123.107:6807/1132330073","heartbeat_front_addr":"192.168.123.107:6805/1132330073","state":["exists","up"]},{"osd":1,"uuid":"ace25c81-45bd-4eb3-b02f-ff194f355af7","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":16,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6802","nonce":1450039054},{"type":"v1","addr":"192.168.123.102:6803","nonce":1450039054}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6804","nonce":1450039054},{"type":"v1","addr":"192.168.123.102:6805","nonce":1450039054}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6808","nonce":1450039054},{"type":"v1","addr":"192.168.123.102:6809","nonce":1450039054}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6806","nonce":1450039054},{"type":"v1","addr":"192.168.123.102:6807","nonce":1450039054}]},"public_addr":"192.168.123.102:6803/1450039054","cluster_addr":"192.168.123.102:6805/1450039054","heartbeat_back_addr":"192.168.123.102:6809/1450039054","heartbeat_front_addr":"192.168.123.102:6807/1450039054","state":["exists","up"]},{"osd":2,"uuid":"5e438db1-97e7-4551-a2a8-5b5117692f52","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":19,"up_thru":20,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6808","nonce":3130883557},{"type":"v1","addr":"192.168.123.107:6809","nonce":3130883557}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6810","nonce":3130883557},{"type":"v1","addr":"192.168.123.107:6811","nonce":3130883557}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6814","nonce":3130883557},{"type":"v1","addr":"192.168.123.107:6815","nonce":3130883557}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6812","nonce":3130883557},{"type":"v1","addr":"192.168.123.107:6813","nonce":3130883557}]},"public_addr":"192.168.123.107:6809/3130883557","cluster_addr":"192.168.123.107:6811/3130883557","heartbeat_back_addr":"192.168.123.107:6815/3130883557","heartbeat_front_addr":"192.168.123.107:6813/3
130883557","state":["exists","up"]},{"osd":3,"uuid":"0b2838af-d2fd-47a1-a00c-95a72f13f66a","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":19,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6810","nonce":1555906781},{"type":"v1","addr":"192.168.123.102:6811","nonce":1555906781}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6812","nonce":1555906781},{"type":"v1","addr":"192.168.123.102:6813","nonce":1555906781}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6816","nonce":1555906781},{"type":"v1","addr":"192.168.123.102:6817","nonce":1555906781}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6814","nonce":1555906781},{"type":"v1","addr":"192.168.123.102:6815","nonce":1555906781}]},"public_addr":"192.168.123.102:6811/1555906781","cluster_addr":"192.168.123.102:6813/1555906781","heartbeat_back_addr":"192.168.123.102:6817/1555906781","heartbeat_front_addr":"192.168.123.102:6815/1555906781","state":["exists","up"]},{"osd":4,"uuid":"8af0222d-7b05-4f10-a678-5f0008c2f8f8","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":22,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6818","nonce":2614111495},{"type":"v1","addr":"192.168.123.102:6819","nonce":2614111495}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6820","nonce":2614111495},{"type":"v1","addr":"192.168.123.102:6821","nonce":2614111495}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6824","nonce":2614111495},{"type":"v1","addr":"192.168.123.102:6825","nonce":2614111495}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6822","nonce":2614111495},{"type":"v1","addr":"192.168.123.102:6823","nonce":2614111495}]},"public_addr":"192.168.123.102:6819/2614111495","cluster_addr":"192.168.123.102:6821/2614111495","heartbeat_back_addr":"192.168.123.102:6825/2614111495","heartbeat_front_addr":"192.168.123.102:6823/2614111495","state":["exists","up"]},{"osd":5,"uuid":"e23d8375-e171-457c-a818-baefaf27ce5c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":20,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6816","nonce":1931549358},{"type":"v1","addr":"192.168.123.107:6817","nonce":1931549358}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6818","nonce":1931549358},{"type":"v1","addr":"192.168.123.107:6819","nonce":1931549358}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6822","nonce":1931549358},{"type":"v1","addr":"192.168.123.107:6823","nonce":1931549358}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6820","nonce":1931549358},{"type":"v1","addr":"192.168.123.107:6821","nonce":1931549358}]},"public_addr":"192.168.123.107:6817/1931549358","cluster_addr":"192.168.123.107:6819/1931549358","heartbeat_back_addr":"192.168.123.107:6823/1931549358","heartbeat_front_addr":"192.168.123.107:6821/1931549358","state":["exists","up"]},{"osd":6,"uuid":"323a807a-94bd-4543-a9ad-add56a77e9da","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":21,"up_thru":22,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6824","nonce":2542357446},{"type":"v1","addr":"192.168.123.107:6825","nonce
":2542357446}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2542357446},{"type":"v1","addr":"192.168.123.107:6827","nonce":2542357446}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6830","nonce":2542357446},{"type":"v1","addr":"192.168.123.107:6831","nonce":2542357446}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":2542357446},{"type":"v1","addr":"192.168.123.107:6829","nonce":2542357446}]},"public_addr":"192.168.123.107:6825/2542357446","cluster_addr":"192.168.123.107:6827/2542357446","heartbeat_back_addr":"192.168.123.107:6831/2542357446","heartbeat_front_addr":"192.168.123.107:6829/2542357446","state":["exists","up"]},{"osd":7,"uuid":"6a439063-03a8-4958-811b-6a2933fe0919","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":22,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":322732066},{"type":"v1","addr":"192.168.123.102:6827","nonce":322732066}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":322732066},{"type":"v1","addr":"192.168.123.102:6829","nonce":322732066}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6832","nonce":322732066},{"type":"v1","addr":"192.168.123.102:6833","nonce":322732066}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6830","nonce":322732066},{"type":"v1","addr":"192.168.123.102:6831","nonce":322732066}]},"public_addr":"192.168.123.102:6827/322732066","cluster_addr":"192.168.123.102:6829/322732066","heartbeat_back_addr":"192.168.123.102:6833/322732066","heartbeat_front_addr":"192.168.123.102:6831/322732066","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-06T22:40:02.350162+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-06T22:40:01.875501+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-06T22:40:04.990498+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-06T22:40:04.603391+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-06T22:40:06.423059+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-06T22:40:05.905099+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-06T22:40:06.921201+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-06T22:40:07.223661+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.102:0/751966813":"2026-03-07T22:37:44.254116+0000","192.168.123.102:0/15
35656930":"2026-03-07T22:38:50.732745+0000","192.168.123.102:6800/58349791":"2026-03-07T22:38:50.732745+0000","192.168.123.102:6800/504586972":"2026-03-07T22:37:44.254116+0000","192.168.123.102:0/846830622":"2026-03-07T22:38:05.493365+0000","192.168.123.102:0/575637168":"2026-03-07T22:38:05.493365+0000","192.168.123.102:0/106909585":"2026-03-07T22:38:50.732745+0000","192.168.123.102:6801/504586972":"2026-03-07T22:37:44.254116+0000","192.168.123.102:0/3124113404":"2026-03-07T22:37:44.254116+0000","192.168.123.102:0/2969876558":"2026-03-07T22:37:44.254116+0000","192.168.123.102:0/3545077305":"2026-03-07T22:38:05.493365+0000","192.168.123.102:6800/2494222457":"2026-03-07T22:38:05.493365+0000","192.168.123.102:6801/58349791":"2026-03-07T22:38:50.732745+0000","192.168.123.102:6801/2494222457":"2026-03-07T22:38:05.493365+0000","192.168.123.102:0/3879353166":"2026-03-07T22:38:50.732745+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-06T23:40:18.054 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-06T22:40:06.797344+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'is_stretch_pool': False, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '24', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_type': 'Fair distribution', 'score_acting': 7.889999866485596, 'score_stable': 7.889999866485596, 'optimal_score': 0.3799999952316284, 'raw_score_acting': 3, 'raw_score_stable': 3, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}] 2026-03-06T23:40:18.055 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image 
harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph osd pool get .mgr pg_num
2026-03-06T23:40:18.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:18 vm02 bash[17013]: cluster 2026-03-06T22:40:16.754050+0000 mgr.vm02.opvwec (mgr.14199) 98 : cluster [DBG] pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:18.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:18 vm02 bash[17013]: audit 2026-03-06T22:40:17.974167+0000 mon.vm02 (mon.0) 601 : audit [DBG] from='client.? 192.168.123.102:0/1812911880' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-06T23:40:18.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:18 vm07 bash[20848]: cluster 2026-03-06T22:40:16.754050+0000 mgr.vm02.opvwec (mgr.14199) 98 : cluster [DBG] pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:18.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:18 vm07 bash[20848]: audit 2026-03-06T22:40:17.974167+0000 mon.vm02 (mon.0) 601 : audit [DBG] from='client.? 192.168.123.102:0/1812911880' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-06T23:40:20.645 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:20 vm02 bash[17013]: cluster 2026-03-06T22:40:18.754298+0000 mgr.vm02.opvwec (mgr.14199) 99 : cluster [DBG] pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:20.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:20 vm07 bash[20848]: cluster 2026-03-06T22:40:18.754298+0000 mgr.vm02.opvwec (mgr.14199) 99 : cluster [DBG] pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:21.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:21 vm07 bash[20848]: audit 2026-03-06T22:40:20.799820+0000 mon.vm02 (mon.0) 602 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:40:21.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:21 vm07 bash[20848]: audit 2026-03-06T22:40:20.954473+0000 mon.vm02 (mon.0) 603 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:21.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:21 vm07 bash[20848]: audit 2026-03-06T22:40:20.959030+0000 mon.vm02 (mon.0) 604 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:21.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:21 vm07 bash[20848]: audit 2026-03-06T22:40:20.960122+0000 mon.vm02 (mon.0) 605 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch
2026-03-06T23:40:21.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:21 vm02 bash[17013]: audit 2026-03-06T22:40:20.799820+0000 mon.vm02 (mon.0) 602 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:40:21.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:21 vm02 bash[17013]: audit 2026-03-06T22:40:20.954473+0000 mon.vm02 (mon.0) 603 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:21.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:21 vm02 bash[17013]: audit 2026-03-06T22:40:20.959030+0000 mon.vm02 (mon.0) 604 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:21.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:21 vm02 bash[17013]: audit 2026-03-06T22:40:20.960122+0000 mon.vm02 (mon.0) 605 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch
2026-03-06T23:40:22.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:22 vm07 bash[20848]: cluster 2026-03-06T22:40:20.754539+0000 mgr.vm02.opvwec (mgr.14199) 100 : cluster [DBG] pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:22.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:22 vm07 bash[20848]: cephadm 2026-03-06T22:40:20.944489+0000 mgr.vm02.opvwec (mgr.14199) 101 : cephadm [INF] Detected new or changed devices on vm02
2026-03-06T23:40:22.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:22 vm07 bash[20848]: audit 2026-03-06T22:40:22.042246+0000 mon.vm02 (mon.0) 606 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:22.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:22 vm07 bash[20848]: audit 2026-03-06T22:40:22.047112+0000 mon.vm02 (mon.0) 607 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:22.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:22 vm07 bash[20848]: audit 2026-03-06T22:40:22.048076+0000 mon.vm02 (mon.0) 608 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-06T23:40:22.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:22 vm07 bash[20848]: audit 2026-03-06T22:40:22.048882+0000 mon.vm02 (mon.0) 609 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:40:22.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:22 vm07 bash[20848]: audit 2026-03-06T22:40:22.049377+0000 mon.vm02 (mon.0) 610 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-06T23:40:22.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:22 vm07 bash[20848]: audit 2026-03-06T22:40:22.053392+0000 mon.vm02 (mon.0) 611 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:40:22.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:22 vm07 bash[20848]: audit 2026-03-06T22:40:22.055072+0000 mon.vm02 (mon.0) 612 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-06T23:40:22.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:22 vm02 bash[17013]: cluster 2026-03-06T22:40:20.754539+0000 mgr.vm02.opvwec (mgr.14199) 100 : cluster [DBG] pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:40:22.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:22 vm02 bash[17013]: cluster 2026-03-06T22:40:20.754539+0000 mgr.vm02.opvwec (mgr.14199) 100 : cluster [DBG] pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:40:22.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:22 vm02 bash[17013]: cephadm 2026-03-06T22:40:20.944489+0000 mgr.vm02.opvwec (mgr.14199) 101 : cephadm [INF] Detected new or changed devices on vm02 2026-03-06T23:40:22.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:22 vm02 bash[17013]: cephadm 2026-03-06T22:40:20.944489+0000 mgr.vm02.opvwec (mgr.14199) 101 : cephadm [INF] Detected new or changed devices on vm02 2026-03-06T23:40:22.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:22 vm02 bash[17013]: audit 2026-03-06T22:40:22.042246+0000 mon.vm02 (mon.0) 606 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:40:22.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:22 vm02 bash[17013]: audit 2026-03-06T22:40:22.042246+0000 mon.vm02 (mon.0) 606 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:40:22.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:22 vm02 bash[17013]: audit 2026-03-06T22:40:22.047112+0000 mon.vm02 (mon.0) 607 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:40:22.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:22 vm02 bash[17013]: audit 2026-03-06T22:40:22.047112+0000 mon.vm02 (mon.0) 607 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:40:22.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:22 vm02 bash[17013]: audit 2026-03-06T22:40:22.048076+0000 mon.vm02 (mon.0) 608 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-06T23:40:22.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:22 vm02 bash[17013]: audit 2026-03-06T22:40:22.048076+0000 mon.vm02 (mon.0) 608 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-06T23:40:22.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:22 vm02 bash[17013]: audit 2026-03-06T22:40:22.048882+0000 mon.vm02 (mon.0) 609 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:40:22.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:22 vm02 bash[17013]: audit 2026-03-06T22:40:22.048882+0000 mon.vm02 (mon.0) 609 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:40:22.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:22 vm02 bash[17013]: audit 2026-03-06T22:40:22.049377+0000 mon.vm02 (mon.0) 
610 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-06T23:40:22.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:22 vm02 bash[17013]: audit 2026-03-06T22:40:22.049377+0000 mon.vm02 (mon.0) 610 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-06T23:40:22.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:22 vm02 bash[17013]: audit 2026-03-06T22:40:22.053392+0000 mon.vm02 (mon.0) 611 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:40:22.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:22 vm02 bash[17013]: audit 2026-03-06T22:40:22.053392+0000 mon.vm02 (mon.0) 611 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:40:22.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:22 vm02 bash[17013]: audit 2026-03-06T22:40:22.055072+0000 mon.vm02 (mon.0) 612 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-06T23:40:22.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:22 vm02 bash[17013]: audit 2026-03-06T22:40:22.055072+0000 mon.vm02 (mon.0) 612 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-06T23:40:23.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:23 vm07 bash[20848]: cephadm 2026-03-06T22:40:22.036335+0000 mgr.vm02.opvwec (mgr.14199) 102 : cephadm [INF] Detected new or changed devices on vm07 2026-03-06T23:40:23.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:23 vm07 bash[20848]: cephadm 2026-03-06T22:40:22.036335+0000 mgr.vm02.opvwec (mgr.14199) 102 : cephadm [INF] Detected new or changed devices on vm07 2026-03-06T23:40:23.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:23 vm02 bash[17013]: cephadm 2026-03-06T22:40:22.036335+0000 mgr.vm02.opvwec (mgr.14199) 102 : cephadm [INF] Detected new or changed devices on vm07 2026-03-06T23:40:23.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:23 vm02 bash[17013]: cephadm 2026-03-06T22:40:22.036335+0000 mgr.vm02.opvwec (mgr.14199) 102 : cephadm [INF] Detected new or changed devices on vm07 2026-03-06T23:40:23.951 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config 2026-03-06T23:40:24.307 INFO:teuthology.orchestra.run.vm02.stdout:pg_num: 1 2026-03-06T23:40:24.370 INFO:tasks.cephadm:Setting up client nodes... 
2026-03-06T23:40:24.370 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'
2026-03-06T23:40:24.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:24 vm07 bash[20848]: cluster 2026-03-06T22:40:22.754799+0000 mgr.vm02.opvwec (mgr.14199) 103 : cluster [DBG] pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:24.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:24 vm02 bash[17013]: cluster 2026-03-06T22:40:22.754799+0000 mgr.vm02.opvwec (mgr.14199) 103 : cluster [DBG] pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:25.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:25 vm07 bash[20848]: audit 2026-03-06T22:40:24.302466+0000 mon.vm02 (mon.0) 613 : audit [DBG] from='client.? 192.168.123.102:0/2832363255' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch
2026-03-06T23:40:25.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:25 vm02 bash[17013]: audit 2026-03-06T22:40:24.302466+0000 mon.vm02 (mon.0) 613 : audit [DBG] from='client.? 192.168.123.102:0/2832363255' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch
2026-03-06T23:40:26.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:26 vm07 bash[20848]: cluster 2026-03-06T22:40:24.755124+0000 mgr.vm02.opvwec (mgr.14199) 104 : cluster [DBG] pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:26.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:26 vm02 bash[17013]: cluster 2026-03-06T22:40:24.755124+0000 mgr.vm02.opvwec (mgr.14199) 104 : cluster [DBG] pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:28.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:28 vm07 bash[20848]: cluster 2026-03-06T22:40:26.755367+0000 mgr.vm02.opvwec (mgr.14199) 105 : cluster [DBG] pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:28.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:28 vm02 bash[17013]: cluster 2026-03-06T22:40:26.755367+0000 mgr.vm02.opvwec (mgr.14199) 105 : cluster [DBG] pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:29.140 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:40:29.532 INFO:teuthology.orchestra.run.vm02.stdout:[client.0]
2026-03-06T23:40:29.532 INFO:teuthology.orchestra.run.vm02.stdout: key = AQDdV6tpcPcaHxAAuN7WzpeXBxaE8U3j0O8ZfQ==
2026-03-06T23:40:29.589 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-06T23:40:29.589 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/ceph/ceph.client.0.keyring
2026-03-06T23:40:29.589 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring
2026-03-06T23:40:29.601 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'
2026-03-06T23:40:30.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:30 vm02 bash[17013]: cluster 2026-03-06T22:40:28.755674+0000 mgr.vm02.opvwec (mgr.14199) 106 : cluster [DBG] pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:30.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:30 vm02 bash[17013]: cluster 2026-03-06T22:40:28.755674+0000 mgr.vm02.opvwec (mgr.14199) 106 : cluster [DBG] pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:30.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:30 vm02 bash[17013]: audit 2026-03-06T22:40:29.521771+0000 mon.vm02 (mon.0) 614 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-06T23:40:30.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:30 vm02 bash[17013]: audit 2026-03-06T22:40:29.524520+0000 mon.vm02 (mon.0) 615 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-06T23:40:30.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:30 vm02 bash[17013]: audit 2026-03-06T22:40:29.524930+0000 mon.vm07 (mon.1) 23 : audit [INF] from='client.? 192.168.123.102:0/3873899729' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-06T23:40:30.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:30 vm07 bash[20848]: cluster 2026-03-06T22:40:28.755674+0000 mgr.vm02.opvwec (mgr.14199) 106 : cluster [DBG] pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:30.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:30 vm07 bash[20848]: audit 2026-03-06T22:40:29.521771+0000 mon.vm02 (mon.0) 614 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-06T23:40:30.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:30 vm07 bash[20848]: audit 2026-03-06T22:40:29.524520+0000 mon.vm02 (mon.0) 615 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-06T23:40:30.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:30 vm07 bash[20848]: audit 2026-03-06T22:40:29.524520+0000 mon.vm02 (mon.0) 615 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-06T23:40:30.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:30 vm07 bash[20848]: audit 2026-03-06T22:40:29.524930+0000 mon.vm07 (mon.1) 23 : audit [INF] from='client.? 192.168.123.102:0/3873899729' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-06T23:40:32.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:32 vm07 bash[20848]: cluster 2026-03-06T22:40:30.755907+0000 mgr.vm02.opvwec (mgr.14199) 107 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:32.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:32 vm02 bash[17013]: cluster 2026-03-06T22:40:30.755907+0000 mgr.vm02.opvwec (mgr.14199) 107 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:34.368 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm07/config
2026-03-06T23:40:34.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:34 vm07 bash[20848]: cluster 2026-03-06T22:40:32.756187+0000 mgr.vm02.opvwec (mgr.14199) 108 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:34.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:34 vm02 bash[17013]: cluster 2026-03-06T22:40:32.756187+0000 mgr.vm02.opvwec (mgr.14199) 108 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:34.760 INFO:teuthology.orchestra.run.vm07.stdout:[client.1]
2026-03-06T23:40:34.760 INFO:teuthology.orchestra.run.vm07.stdout: key = AQDiV6tp0yS3LBAA5wvhw/uMKH9aSgaztaYxiw==
2026-03-06T23:40:34.832 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-06T23:40:34.833 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/ceph/ceph.client.1.keyring
2026-03-06T23:40:34.833 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod 0644 /etc/ceph/ceph.client.1.keyring
2026-03-06T23:40:34.843 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean...
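The "waiting for mgr available" step below polls 'ceph mgr dump --format=json' and inspects the availability flag in the dump (the "available":true field visible in the JSON further down). A minimal equivalent of that readiness probe, assuming jq is installed on the host (jq is not part of this run, and teuthology does the parsing in Python instead):

    # prints "true" once a mgr has gone active; loop until it does
    sudo /home/ubuntu/cephtest/cephadm \
        --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 \
        shell --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- \
        ceph mgr dump --format=json | jq -r '.available'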
2026-03-06T23:40:34.844 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available
2026-03-06T23:40:34.844 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph mgr dump --format=json
2026-03-06T23:40:35.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:35 vm07 bash[20848]: audit 2026-03-06T22:40:34.750057+0000 mon.vm02 (mon.0) 616 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-06T23:40:35.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:35 vm07 bash[20848]: audit 2026-03-06T22:40:34.753206+0000 mon.vm07 (mon.1) 24 : audit [INF] from='client.? 192.168.123.107:0/2664921800' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-06T23:40:35.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:35 vm07 bash[20848]: audit 2026-03-06T22:40:34.753239+0000 mon.vm02 (mon.0) 617 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-06T23:40:35.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:35 vm02 bash[17013]: audit 2026-03-06T22:40:34.750057+0000 mon.vm02 (mon.0) 616 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-06T23:40:35.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:35 vm02 bash[17013]: audit 2026-03-06T22:40:34.753206+0000 mon.vm07 (mon.1) 24 : audit [INF] from='client.? 192.168.123.107:0/2664921800' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-06T23:40:35.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:35 vm02 bash[17013]: audit 2026-03-06T22:40:34.753239+0000 mon.vm02 (mon.0) 617 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-06T23:40:36.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:36 vm07 bash[20848]: cluster 2026-03-06T22:40:34.756493+0000 mgr.vm02.opvwec (mgr.14199) 109 : cluster [DBG] pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:36.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:36 vm07 bash[20848]: audit 2026-03-06T22:40:35.800107+0000 mon.vm02 (mon.0) 618 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:40:36.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:36 vm02 bash[17013]: cluster 2026-03-06T22:40:34.756493+0000 mgr.vm02.opvwec (mgr.14199) 109 : cluster [DBG] pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:36.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:36 vm02 bash[17013]: audit 2026-03-06T22:40:35.800107+0000 mon.vm02 (mon.0) 618 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:40:36.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:36 vm02 bash[17013]: audit 2026-03-06T22:40:35.800107+0000 mon.vm02 (mon.0) 618 : audit [DBG] from='mgr.14199 
192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:40:38.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:38 vm07 bash[20848]: cluster 2026-03-06T22:40:36.756757+0000 mgr.vm02.opvwec (mgr.14199) 110 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:40:38.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:38 vm07 bash[20848]: cluster 2026-03-06T22:40:36.756757+0000 mgr.vm02.opvwec (mgr.14199) 110 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:40:38.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:38 vm02 bash[17013]: cluster 2026-03-06T22:40:36.756757+0000 mgr.vm02.opvwec (mgr.14199) 110 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:40:38.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:38 vm02 bash[17013]: cluster 2026-03-06T22:40:36.756757+0000 mgr.vm02.opvwec (mgr.14199) 110 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:40:39.622 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config 2026-03-06T23:40:39.975 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-06T23:40:40.036 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":19,"flags":0,"active_gid":14199,"active_name":"vm02.opvwec","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6800","nonce":2981991092},{"type":"v1","addr":"192.168.123.102:6801","nonce":2981991092}]},"active_addr":"192.168.123.102:6801/2981991092","active_change":"2026-03-06T22:38:50.732865+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[{"gid":14214,"name":"vm07.jbleen","mgr_features":4540701547738038271,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: 
name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format 
HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked 
down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container 
image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0
,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"def
ault_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. 
You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":
"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[
]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. 
This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"st
r","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":""
,"long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_a
llowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"adv
anced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","prometheus","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = 
Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. 
Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger 
collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0
,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"def
ault_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. 
You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":
"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[
]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. 
This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"st
r","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":""
,"long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_a
llowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"adv
anced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.102:8443/","prometheus":"http://192.168.123.102:9283/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":5,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.102:0","nonce":501107645}]},{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.102:0","nonce":35862587}]},{"name"
:"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.102:0","nonce":3340164367}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.102:0","nonce":1537626553}]}]} 2026-03-06T23:40:40.038 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-06T23:40:40.038 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-06T23:40:40.038 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph osd dump --format=json 2026-03-06T23:40:40.361 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:40 vm02 bash[17013]: cluster 2026-03-06T22:40:38.757037+0000 mgr.vm02.opvwec (mgr.14199) 111 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:40:40.361 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:40 vm02 bash[17013]: cluster 2026-03-06T22:40:38.757037+0000 mgr.vm02.opvwec (mgr.14199) 111 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:40:40.361 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:40 vm02 bash[17013]: audit 2026-03-06T22:40:39.968734+0000 mon.vm02 (mon.0) 619 : audit [DBG] from='client.? 192.168.123.102:0/2937533052' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-06T23:40:40.361 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:40 vm02 bash[17013]: audit 2026-03-06T22:40:39.968734+0000 mon.vm02 (mon.0) 619 : audit [DBG] from='client.? 192.168.123.102:0/2937533052' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-06T23:40:40.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:40 vm07 bash[20848]: cluster 2026-03-06T22:40:38.757037+0000 mgr.vm02.opvwec (mgr.14199) 111 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:40:40.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:40 vm07 bash[20848]: cluster 2026-03-06T22:40:38.757037+0000 mgr.vm02.opvwec (mgr.14199) 111 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:40:40.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:40 vm07 bash[20848]: audit 2026-03-06T22:40:39.968734+0000 mon.vm02 (mon.0) 619 : audit [DBG] from='client.? 192.168.123.102:0/2937533052' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-06T23:40:40.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:40 vm07 bash[20848]: audit 2026-03-06T22:40:39.968734+0000 mon.vm02 (mon.0) 619 : audit [DBG] from='client.? 
192.168.123.102:0/2937533052' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-06T23:40:42.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:42 vm07 bash[20848]: cluster 2026-03-06T22:40:40.757313+0000 mgr.vm02.opvwec (mgr.14199) 112 : cluster [DBG] pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:40:42.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:42 vm07 bash[20848]: cluster 2026-03-06T22:40:40.757313+0000 mgr.vm02.opvwec (mgr.14199) 112 : cluster [DBG] pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:40:42.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:42 vm02 bash[17013]: cluster 2026-03-06T22:40:40.757313+0000 mgr.vm02.opvwec (mgr.14199) 112 : cluster [DBG] pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:40:42.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:42 vm02 bash[17013]: cluster 2026-03-06T22:40:40.757313+0000 mgr.vm02.opvwec (mgr.14199) 112 : cluster [DBG] pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:40:44.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:44 vm07 bash[20848]: cluster 2026-03-06T22:40:42.757603+0000 mgr.vm02.opvwec (mgr.14199) 113 : cluster [DBG] pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:40:44.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:44 vm07 bash[20848]: cluster 2026-03-06T22:40:42.757603+0000 mgr.vm02.opvwec (mgr.14199) 113 : cluster [DBG] pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:40:44.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:44 vm02 bash[17013]: cluster 2026-03-06T22:40:42.757603+0000 mgr.vm02.opvwec (mgr.14199) 113 : cluster [DBG] pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:40:44.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:44 vm02 bash[17013]: cluster 2026-03-06T22:40:42.757603+0000 mgr.vm02.opvwec (mgr.14199) 113 : cluster [DBG] pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:40:44.815 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config 2026-03-06T23:40:45.161 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-06T23:40:45.162 
INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":24,"fsid":"f8b8c16a-19ac-11f1-87e7-9b7402b99c44","created":"2026-03-06T22:37:19.213016+0000","modified":"2026-03-06T22:40:11.444437+0000","last_up_change":"2026-03-06T22:40:09.437893+0000","last_in_change":"2026-03-06T22:39:44.682915+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":9,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-06T22:40:06.797344+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"24","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"418e828c-709f-40ee-9849-890589b82337","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":17,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6800","nonce":1132330073},{"type":"v1","addr":"192.168.123.107:6801","nonce":1132330073}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6802","nonce":1132330073},{"type":"v1","addr":"192.168.123.107:6803","nonce":1132330073}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6806","nonce":1132330073},{"type":"v1","addr":"192.168.123.107:6807","nonce":1132330073}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6804","nonce":1132330073},{"type":"v1","addr":"192.168.123.107:6805","nonce":1132330073}]},"public_addr":"192.168.123.107:6801/1132330073","cluster_addr":"192.168.123.107:6803/1132330073","heartbeat_back_addr":"192.168.123.107:6807/1132330073","heartbeat_front_addr":"192.168.123.107:6805/1132330073","state":["exists","up"]},{"osd":1,"uuid":"ace25c81-45bd-4eb3-b02f-ff194f355af7","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":16,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6802","nonce":1450039054},{"type":"v1","addr":"192.168.123.102:6803","nonce":1450039054}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6804","nonce":1450039054},{"type":"v1","addr":"192.168.123.102:6805","nonce":1450039054}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6808","nonce":1450039054},{"type":"v1","addr":"192.168.123.102:6809","nonce":1450039054}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6806","nonce":1450039054},{"type":"v1","addr":"192.168.123.102:6807","nonce":1450039054}]},"public_addr":"192.168.123.102:6803/1450039054","cluster_addr":"192.168.123.102:6805/1450039054","heartbeat_back_addr":"192.168.123.102:6809/1450039054","heartbeat_front_addr":"192.168.123.102:6807/1450039054","state":["exists","up"]},{"osd":2,"uuid":"5e438db1-97e7-4551-a2a8-5b5117692f52","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":19,"up_thru":20,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6808","nonce":3130883557},{"type":"v1","addr":"192.168.123.107:6809","nonce":3130883557}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6810","nonce":3130883557},{"type":"v1","addr":"192.168.123.107:6811","nonce":3130883557}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6814","nonce":3130883557},{"type":"v1","addr":"192.168.123.107:6815","nonce":3130883557}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6812","nonce":3130883557},{"type":"v1","addr":"192.168.123.107:6813","nonce":3130883557}]},"public_addr":"192.168.123.107:6809/3130883557","cluster_addr":"192.168.123.107:6811/3130883557","heartbeat_back_addr":"192.168.123.107:6815/3130883557","heartbeat_front_addr":"192.168.123.107:6813/3130883557","state":["exists","up"]},{"osd":3,"uuid":"0b2838af-d2fd-47a1-a00c-95a72f13f66a","up":1,"in":1,"weight":1,"primary_affinity
":1,"last_clean_begin":0,"last_clean_end":0,"up_from":19,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6810","nonce":1555906781},{"type":"v1","addr":"192.168.123.102:6811","nonce":1555906781}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6812","nonce":1555906781},{"type":"v1","addr":"192.168.123.102:6813","nonce":1555906781}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6816","nonce":1555906781},{"type":"v1","addr":"192.168.123.102:6817","nonce":1555906781}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6814","nonce":1555906781},{"type":"v1","addr":"192.168.123.102:6815","nonce":1555906781}]},"public_addr":"192.168.123.102:6811/1555906781","cluster_addr":"192.168.123.102:6813/1555906781","heartbeat_back_addr":"192.168.123.102:6817/1555906781","heartbeat_front_addr":"192.168.123.102:6815/1555906781","state":["exists","up"]},{"osd":4,"uuid":"8af0222d-7b05-4f10-a678-5f0008c2f8f8","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":22,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6818","nonce":2614111495},{"type":"v1","addr":"192.168.123.102:6819","nonce":2614111495}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6820","nonce":2614111495},{"type":"v1","addr":"192.168.123.102:6821","nonce":2614111495}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6824","nonce":2614111495},{"type":"v1","addr":"192.168.123.102:6825","nonce":2614111495}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6822","nonce":2614111495},{"type":"v1","addr":"192.168.123.102:6823","nonce":2614111495}]},"public_addr":"192.168.123.102:6819/2614111495","cluster_addr":"192.168.123.102:6821/2614111495","heartbeat_back_addr":"192.168.123.102:6825/2614111495","heartbeat_front_addr":"192.168.123.102:6823/2614111495","state":["exists","up"]},{"osd":5,"uuid":"e23d8375-e171-457c-a818-baefaf27ce5c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":20,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6816","nonce":1931549358},{"type":"v1","addr":"192.168.123.107:6817","nonce":1931549358}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6818","nonce":1931549358},{"type":"v1","addr":"192.168.123.107:6819","nonce":1931549358}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6822","nonce":1931549358},{"type":"v1","addr":"192.168.123.107:6823","nonce":1931549358}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6820","nonce":1931549358},{"type":"v1","addr":"192.168.123.107:6821","nonce":1931549358}]},"public_addr":"192.168.123.107:6817/1931549358","cluster_addr":"192.168.123.107:6819/1931549358","heartbeat_back_addr":"192.168.123.107:6823/1931549358","heartbeat_front_addr":"192.168.123.107:6821/1931549358","state":["exists","up"]},{"osd":6,"uuid":"323a807a-94bd-4543-a9ad-add56a77e9da","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":21,"up_thru":22,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6824","nonce":2542357446},{"type":"v1","addr":"192.168.123.107:6825","nonce":2542357446}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2542357446},{"type":"v1","addr":"192.1
68.123.107:6827","nonce":2542357446}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6830","nonce":2542357446},{"type":"v1","addr":"192.168.123.107:6831","nonce":2542357446}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":2542357446},{"type":"v1","addr":"192.168.123.107:6829","nonce":2542357446}]},"public_addr":"192.168.123.107:6825/2542357446","cluster_addr":"192.168.123.107:6827/2542357446","heartbeat_back_addr":"192.168.123.107:6831/2542357446","heartbeat_front_addr":"192.168.123.107:6829/2542357446","state":["exists","up"]},{"osd":7,"uuid":"6a439063-03a8-4958-811b-6a2933fe0919","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":22,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":322732066},{"type":"v1","addr":"192.168.123.102:6827","nonce":322732066}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":322732066},{"type":"v1","addr":"192.168.123.102:6829","nonce":322732066}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6832","nonce":322732066},{"type":"v1","addr":"192.168.123.102:6833","nonce":322732066}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6830","nonce":322732066},{"type":"v1","addr":"192.168.123.102:6831","nonce":322732066}]},"public_addr":"192.168.123.102:6827/322732066","cluster_addr":"192.168.123.102:6829/322732066","heartbeat_back_addr":"192.168.123.102:6833/322732066","heartbeat_front_addr":"192.168.123.102:6831/322732066","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-06T22:40:02.350162+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-06T22:40:01.875501+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-06T22:40:04.990498+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-06T22:40:04.603391+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-06T22:40:06.423059+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-06T22:40:05.905099+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-06T22:40:06.921201+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-06T22:40:07.223661+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.102:0/751966813":"2026-03-07T22:37:44.254116+0000","192.168.123.102:0/1535656930":"2026-03-07T22:38:50.732745+0000","192.168.123.102:6800/58349791":"2026-03-07T22:38:50.732745+0000","192.168.123.102:6800/5
04586972":"2026-03-07T22:37:44.254116+0000","192.168.123.102:0/846830622":"2026-03-07T22:38:05.493365+0000","192.168.123.102:0/575637168":"2026-03-07T22:38:05.493365+0000","192.168.123.102:0/106909585":"2026-03-07T22:38:50.732745+0000","192.168.123.102:6801/504586972":"2026-03-07T22:37:44.254116+0000","192.168.123.102:0/3124113404":"2026-03-07T22:37:44.254116+0000","192.168.123.102:0/2969876558":"2026-03-07T22:37:44.254116+0000","192.168.123.102:0/3545077305":"2026-03-07T22:38:05.493365+0000","192.168.123.102:6800/2494222457":"2026-03-07T22:38:05.493365+0000","192.168.123.102:6801/58349791":"2026-03-07T22:38:50.732745+0000","192.168.123.102:6801/2494222457":"2026-03-07T22:38:05.493365+0000","192.168.123.102:0/3879353166":"2026-03-07T22:38:50.732745+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-06T23:40:45.222 INFO:tasks.cephadm.ceph_manager.ceph:all up! 2026-03-06T23:40:45.222 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph osd dump --format=json 2026-03-06T23:40:45.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:45 vm02 bash[17013]: audit 2026-03-06T22:40:45.155984+0000 mon.vm02 (mon.0) 620 : audit [DBG] from='client.? 192.168.123.102:0/882440986' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-06T23:40:45.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:45 vm02 bash[17013]: audit 2026-03-06T22:40:45.155984+0000 mon.vm02 (mon.0) 620 : audit [DBG] from='client.? 192.168.123.102:0/882440986' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-06T23:40:45.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:45 vm07 bash[20848]: audit 2026-03-06T22:40:45.155984+0000 mon.vm02 (mon.0) 620 : audit [DBG] from='client.? 192.168.123.102:0/882440986' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-06T23:40:45.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:45 vm07 bash[20848]: audit 2026-03-06T22:40:45.155984+0000 mon.vm02 (mon.0) 620 : audit [DBG] from='client.? 
192.168.123.102:0/882440986' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-06T23:40:46.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:46 vm07 bash[20848]: cluster 2026-03-06T22:40:44.757906+0000 mgr.vm02.opvwec (mgr.14199) 114 : cluster [DBG] pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:40:46.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:46 vm07 bash[20848]: cluster 2026-03-06T22:40:44.757906+0000 mgr.vm02.opvwec (mgr.14199) 114 : cluster [DBG] pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:40:46.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:46 vm02 bash[17013]: cluster 2026-03-06T22:40:44.757906+0000 mgr.vm02.opvwec (mgr.14199) 114 : cluster [DBG] pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:40:46.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:46 vm02 bash[17013]: cluster 2026-03-06T22:40:44.757906+0000 mgr.vm02.opvwec (mgr.14199) 114 : cluster [DBG] pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:40:48.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:48 vm07 bash[20848]: cluster 2026-03-06T22:40:46.758157+0000 mgr.vm02.opvwec (mgr.14199) 115 : cluster [DBG] pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:40:48.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:48 vm07 bash[20848]: cluster 2026-03-06T22:40:46.758157+0000 mgr.vm02.opvwec (mgr.14199) 115 : cluster [DBG] pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:40:48.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:48 vm02 bash[17013]: cluster 2026-03-06T22:40:46.758157+0000 mgr.vm02.opvwec (mgr.14199) 115 : cluster [DBG] pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:40:48.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:48 vm02 bash[17013]: cluster 2026-03-06T22:40:46.758157+0000 mgr.vm02.opvwec (mgr.14199) 115 : cluster [DBG] pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:40:50.003 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config 2026-03-06T23:40:50.338 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-06T23:40:50.338 
INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":24,"fsid":"f8b8c16a-19ac-11f1-87e7-9b7402b99c44","created":"2026-03-06T22:37:19.213016+0000","modified":"2026-03-06T22:40:11.444437+0000","last_up_change":"2026-03-06T22:40:09.437893+0000","last_in_change":"2026-03-06T22:39:44.682915+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":9,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-06T22:40:06.797344+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"24","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"418e828c-709f-40ee-9849-890589b82337","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":17,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6800","nonce":1132330073},{"type":"v1","addr":"192.168.123.107:6801","nonce":1132330073}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6802","nonce":1132330073},{"type":"v1","addr":"192.168.123.107:6803","nonce":1132330073}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6806","nonce":1132330073},{"type":"v1","addr":"192.168.123.107:6807","nonce":1132330073}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6804","nonce":1132330073},{"type":"v1","addr":"192.168.123.107:6805","nonce":1132330073}]},"public_addr":"192.168.123.107:6801/1132330073","cluster_addr":"192.168.123.107:6803/1132330073","heartbeat_back_addr":"192.168.123.107:6807/1132330073","heartbeat_front_addr":"192.168.123.107:6805/1132330073","state":["exists","up"]},{"osd":1,"uuid":"ace25c81-45bd-4eb3-b02f-ff194f355af7","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":16,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6802","nonce":1450039054},{"type":"v1","addr":"192.168.123.102:6803","nonce":1450039054}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6804","nonce":1450039054},{"type":"v1","addr":"192.168.123.102:6805","nonce":1450039054}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6808","nonce":1450039054},{"type":"v1","addr":"192.168.123.102:6809","nonce":1450039054}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6806","nonce":1450039054},{"type":"v1","addr":"192.168.123.102:6807","nonce":1450039054}]},"public_addr":"192.168.123.102:6803/1450039054","cluster_addr":"192.168.123.102:6805/1450039054","heartbeat_back_addr":"192.168.123.102:6809/1450039054","heartbeat_front_addr":"192.168.123.102:6807/1450039054","state":["exists","up"]},{"osd":2,"uuid":"5e438db1-97e7-4551-a2a8-5b5117692f52","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":19,"up_thru":20,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6808","nonce":3130883557},{"type":"v1","addr":"192.168.123.107:6809","nonce":3130883557}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6810","nonce":3130883557},{"type":"v1","addr":"192.168.123.107:6811","nonce":3130883557}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6814","nonce":3130883557},{"type":"v1","addr":"192.168.123.107:6815","nonce":3130883557}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6812","nonce":3130883557},{"type":"v1","addr":"192.168.123.107:6813","nonce":3130883557}]},"public_addr":"192.168.123.107:6809/3130883557","cluster_addr":"192.168.123.107:6811/3130883557","heartbeat_back_addr":"192.168.123.107:6815/3130883557","heartbeat_front_addr":"192.168.123.107:6813/3130883557","state":["exists","up"]},{"osd":3,"uuid":"0b2838af-d2fd-47a1-a00c-95a72f13f66a","up":1,"in":1,"weight":1,"primary_affinity
":1,"last_clean_begin":0,"last_clean_end":0,"up_from":19,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6810","nonce":1555906781},{"type":"v1","addr":"192.168.123.102:6811","nonce":1555906781}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6812","nonce":1555906781},{"type":"v1","addr":"192.168.123.102:6813","nonce":1555906781}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6816","nonce":1555906781},{"type":"v1","addr":"192.168.123.102:6817","nonce":1555906781}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6814","nonce":1555906781},{"type":"v1","addr":"192.168.123.102:6815","nonce":1555906781}]},"public_addr":"192.168.123.102:6811/1555906781","cluster_addr":"192.168.123.102:6813/1555906781","heartbeat_back_addr":"192.168.123.102:6817/1555906781","heartbeat_front_addr":"192.168.123.102:6815/1555906781","state":["exists","up"]},{"osd":4,"uuid":"8af0222d-7b05-4f10-a678-5f0008c2f8f8","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":22,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6818","nonce":2614111495},{"type":"v1","addr":"192.168.123.102:6819","nonce":2614111495}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6820","nonce":2614111495},{"type":"v1","addr":"192.168.123.102:6821","nonce":2614111495}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6824","nonce":2614111495},{"type":"v1","addr":"192.168.123.102:6825","nonce":2614111495}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6822","nonce":2614111495},{"type":"v1","addr":"192.168.123.102:6823","nonce":2614111495}]},"public_addr":"192.168.123.102:6819/2614111495","cluster_addr":"192.168.123.102:6821/2614111495","heartbeat_back_addr":"192.168.123.102:6825/2614111495","heartbeat_front_addr":"192.168.123.102:6823/2614111495","state":["exists","up"]},{"osd":5,"uuid":"e23d8375-e171-457c-a818-baefaf27ce5c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":20,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6816","nonce":1931549358},{"type":"v1","addr":"192.168.123.107:6817","nonce":1931549358}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6818","nonce":1931549358},{"type":"v1","addr":"192.168.123.107:6819","nonce":1931549358}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6822","nonce":1931549358},{"type":"v1","addr":"192.168.123.107:6823","nonce":1931549358}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6820","nonce":1931549358},{"type":"v1","addr":"192.168.123.107:6821","nonce":1931549358}]},"public_addr":"192.168.123.107:6817/1931549358","cluster_addr":"192.168.123.107:6819/1931549358","heartbeat_back_addr":"192.168.123.107:6823/1931549358","heartbeat_front_addr":"192.168.123.107:6821/1931549358","state":["exists","up"]},{"osd":6,"uuid":"323a807a-94bd-4543-a9ad-add56a77e9da","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":21,"up_thru":22,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6824","nonce":2542357446},{"type":"v1","addr":"192.168.123.107:6825","nonce":2542357446}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2542357446},{"type":"v1","addr":"192.1
68.123.107:6827","nonce":2542357446}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6830","nonce":2542357446},{"type":"v1","addr":"192.168.123.107:6831","nonce":2542357446}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":2542357446},{"type":"v1","addr":"192.168.123.107:6829","nonce":2542357446}]},"public_addr":"192.168.123.107:6825/2542357446","cluster_addr":"192.168.123.107:6827/2542357446","heartbeat_back_addr":"192.168.123.107:6831/2542357446","heartbeat_front_addr":"192.168.123.107:6829/2542357446","state":["exists","up"]},{"osd":7,"uuid":"6a439063-03a8-4958-811b-6a2933fe0919","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":22,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":322732066},{"type":"v1","addr":"192.168.123.102:6827","nonce":322732066}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":322732066},{"type":"v1","addr":"192.168.123.102:6829","nonce":322732066}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6832","nonce":322732066},{"type":"v1","addr":"192.168.123.102:6833","nonce":322732066}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6830","nonce":322732066},{"type":"v1","addr":"192.168.123.102:6831","nonce":322732066}]},"public_addr":"192.168.123.102:6827/322732066","cluster_addr":"192.168.123.102:6829/322732066","heartbeat_back_addr":"192.168.123.102:6833/322732066","heartbeat_front_addr":"192.168.123.102:6831/322732066","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-06T22:40:02.350162+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-06T22:40:01.875501+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-06T22:40:04.990498+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-06T22:40:04.603391+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-06T22:40:06.423059+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-06T22:40:05.905099+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-06T22:40:06.921201+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-06T22:40:07.223661+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.102:0/751966813":"2026-03-07T22:37:44.254116+0000","192.168.123.102:0/1535656930":"2026-03-07T22:38:50.732745+0000","192.168.123.102:6800/58349791":"2026-03-07T22:38:50.732745+0000","192.168.123.102:6800/5
04586972":"2026-03-07T22:37:44.254116+0000","192.168.123.102:0/846830622":"2026-03-07T22:38:05.493365+0000","192.168.123.102:0/575637168":"2026-03-07T22:38:05.493365+0000","192.168.123.102:0/106909585":"2026-03-07T22:38:50.732745+0000","192.168.123.102:6801/504586972":"2026-03-07T22:37:44.254116+0000","192.168.123.102:0/3124113404":"2026-03-07T22:37:44.254116+0000","192.168.123.102:0/2969876558":"2026-03-07T22:37:44.254116+0000","192.168.123.102:0/3545077305":"2026-03-07T22:38:05.493365+0000","192.168.123.102:6800/2494222457":"2026-03-07T22:38:05.493365+0000","192.168.123.102:6801/58349791":"2026-03-07T22:38:50.732745+0000","192.168.123.102:6801/2494222457":"2026-03-07T22:38:05.493365+0000","192.168.123.102:0/3879353166":"2026-03-07T22:38:50.732745+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-06T23:40:50.397 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph tell osd.0 flush_pg_stats 2026-03-06T23:40:50.398 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph tell osd.1 flush_pg_stats 2026-03-06T23:40:50.398 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph tell osd.2 flush_pg_stats 2026-03-06T23:40:50.398 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph tell osd.3 flush_pg_stats 2026-03-06T23:40:50.398 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph tell osd.4 flush_pg_stats 2026-03-06T23:40:50.398 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph tell osd.5 flush_pg_stats 2026-03-06T23:40:50.398 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph tell osd.6 flush_pg_stats 2026-03-06T23:40:50.399 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph tell osd.7 flush_pg_stats 2026-03-06T23:40:50.645 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:50 vm02 bash[17013]: cluster 2026-03-06T22:40:48.758416+0000 mgr.vm02.opvwec (mgr.14199) 116 : cluster [DBG] pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 
2026-03-06T23:40:50.645 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:50 vm02 bash[17013]: cluster 2026-03-06T22:40:48.758416+0000 mgr.vm02.opvwec (mgr.14199) 116 : cluster [DBG] pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:50.645 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:50 vm02 bash[17013]: audit 2026-03-06T22:40:50.332692+0000 mon.vm02 (mon.0) 621 : audit [DBG] from='client.? 192.168.123.102:0/90900500' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-06T23:40:50.645 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:50 vm02 bash[17013]: audit 2026-03-06T22:40:50.332692+0000 mon.vm02 (mon.0) 621 : audit [DBG] from='client.? 192.168.123.102:0/90900500' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-06T23:40:50.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:50 vm07 bash[20848]: cluster 2026-03-06T22:40:48.758416+0000 mgr.vm02.opvwec (mgr.14199) 116 : cluster [DBG] pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:50.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:50 vm07 bash[20848]: cluster 2026-03-06T22:40:48.758416+0000 mgr.vm02.opvwec (mgr.14199) 116 : cluster [DBG] pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:50.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:50 vm07 bash[20848]: audit 2026-03-06T22:40:50.332692+0000 mon.vm02 (mon.0) 621 : audit [DBG] from='client.? 192.168.123.102:0/90900500' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-06T23:40:50.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:50 vm07 bash[20848]: audit 2026-03-06T22:40:50.332692+0000 mon.vm02 (mon.0) 621 : audit [DBG] from='client.? 192.168.123.102:0/90900500' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-06T23:40:51.416 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:51 vm02 bash[17013]: audit 2026-03-06T22:40:50.800254+0000 mon.vm02 (mon.0) 622 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:40:51.416 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:51 vm02 bash[17013]: audit 2026-03-06T22:40:50.800254+0000 mon.vm02 (mon.0) 622 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:40:51.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:51 vm07 bash[20848]: audit 2026-03-06T22:40:50.800254+0000 mon.vm02 (mon.0) 622 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:40:51.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:51 vm07 bash[20848]: audit 2026-03-06T22:40:50.800254+0000 mon.vm02 (mon.0) 622 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:40:52.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:52 vm07 bash[20848]: cluster 2026-03-06T22:40:50.758686+0000 mgr.vm02.opvwec (mgr.14199) 117 : cluster [DBG] pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:52.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:52 vm07 bash[20848]: cluster 2026-03-06T22:40:50.758686+0000 mgr.vm02.opvwec (mgr.14199) 117 : cluster [DBG] pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:52.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:52 vm02 bash[17013]: cluster 2026-03-06T22:40:50.758686+0000 mgr.vm02.opvwec (mgr.14199) 117 : cluster [DBG] pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:52.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:52 vm02 bash[17013]: cluster 2026-03-06T22:40:50.758686+0000 mgr.vm02.opvwec (mgr.14199) 117 : cluster [DBG] pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:54.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:54 vm07 bash[20848]: cluster 2026-03-06T22:40:52.758936+0000 mgr.vm02.opvwec (mgr.14199) 118 : cluster [DBG] pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:54.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:54 vm07 bash[20848]: cluster 2026-03-06T22:40:52.758936+0000 mgr.vm02.opvwec (mgr.14199) 118 : cluster [DBG] pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:54.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:54 vm02 bash[17013]: cluster 2026-03-06T22:40:52.758936+0000 mgr.vm02.opvwec (mgr.14199) 118 : cluster [DBG] pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:54.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:54 vm02 bash[17013]: cluster 2026-03-06T22:40:52.758936+0000 mgr.vm02.opvwec (mgr.14199) 118 : cluster [DBG] pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:56.042 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:40:56.042 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:40:56.044 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:40:56.046 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:40:56.046 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:40:56.048 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:40:56.049 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:40:56.051 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:40:56.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:56 vm07 bash[20848]: cluster 2026-03-06T22:40:54.759240+0000 mgr.vm02.opvwec (mgr.14199) 119 : cluster [DBG] pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:56.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:56 vm07 bash[20848]: cluster 2026-03-06T22:40:54.759240+0000 mgr.vm02.opvwec (mgr.14199) 119 : cluster [DBG] pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:56.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:56 vm02 bash[17013]: cluster 2026-03-06T22:40:54.759240+0000 mgr.vm02.opvwec (mgr.14199) 119 : cluster [DBG] pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:56.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:56 vm02 bash[17013]: cluster 2026-03-06T22:40:54.759240+0000 mgr.vm02.opvwec (mgr.14199) 119 : cluster [DBG] pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:57.122 INFO:teuthology.orchestra.run.vm02.stdout:94489280523
2026-03-06T23:40:57.122 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph osd last-stat-seq osd.4
2026-03-06T23:40:57.154 INFO:teuthology.orchestra.run.vm02.stdout:85899345931
2026-03-06T23:40:57.154 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph osd last-stat-seq osd.5
2026-03-06T23:40:57.211 INFO:teuthology.orchestra.run.vm02.stdout:68719476748
2026-03-06T23:40:57.211 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph osd last-stat-seq osd.1
2026-03-06T23:40:57.407 INFO:teuthology.orchestra.run.vm02.stdout:81604378636
2026-03-06T23:40:57.408 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph osd last-stat-seq osd.2
2026-03-06T23:40:57.431 INFO:teuthology.orchestra.run.vm02.stdout:90194313227
2026-03-06T23:40:57.431 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph osd last-stat-seq osd.6
2026-03-06T23:40:57.435 INFO:teuthology.orchestra.run.vm02.stdout:73014444044
2026-03-06T23:40:57.435 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph osd last-stat-seq osd.0
2026-03-06T23:40:57.447 INFO:teuthology.orchestra.run.vm02.stdout:81604378636
2026-03-06T23:40:57.447 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph osd last-stat-seq osd.3
2026-03-06T23:40:57.483 INFO:teuthology.orchestra.run.vm02.stdout:94489280523
2026-03-06T23:40:57.484 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph osd last-stat-seq osd.7
2026-03-06T23:40:58.434 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:58 vm02 bash[17013]: cluster 2026-03-06T22:40:56.759509+0000 mgr.vm02.opvwec (mgr.14199) 120 : cluster [DBG] pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:58.434 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:40:58 vm02 bash[17013]: cluster 2026-03-06T22:40:56.759509+0000 mgr.vm02.opvwec (mgr.14199) 120 : cluster [DBG] pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:58.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:58 vm07 bash[20848]: cluster 2026-03-06T22:40:56.759509+0000 mgr.vm02.opvwec (mgr.14199) 120 : cluster [DBG] pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:40:58.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:40:58 vm07 bash[20848]: cluster 2026-03-06T22:40:56.759509+0000 mgr.vm02.opvwec (mgr.14199) 120 : cluster [DBG] pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:00.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:00 vm07 bash[20848]: cluster 2026-03-06T22:40:58.759827+0000 mgr.vm02.opvwec (mgr.14199) 121 : cluster [DBG] pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:00.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:00 vm07 bash[20848]: cluster 2026-03-06T22:40:58.759827+0000 mgr.vm02.opvwec (mgr.14199) 121 : cluster [DBG] pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:00.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:00 vm02 bash[17013]: cluster 2026-03-06T22:40:58.759827+0000 mgr.vm02.opvwec (mgr.14199) 121 : cluster [DBG] pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:00.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:00 vm02 bash[17013]: cluster 2026-03-06T22:40:58.759827+0000 mgr.vm02.opvwec (mgr.14199) 121 : cluster [DBG] pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:02.702 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:41:02.703 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:41:02.703 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:41:02.704 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:41:02.705 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:41:02.708 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:41:02.710 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:41:02.711 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:41:02.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:02 vm07 bash[20848]: cluster 2026-03-06T22:41:00.760093+0000 mgr.vm02.opvwec (mgr.14199) 122 : cluster [DBG] pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:02.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:02 vm07 bash[20848]: cluster 2026-03-06T22:41:00.760093+0000 mgr.vm02.opvwec (mgr.14199) 122 : cluster [DBG] pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:02.728 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:02 vm02 bash[17013]: cluster 2026-03-06T22:41:00.760093+0000 mgr.vm02.opvwec (mgr.14199) 122 : cluster [DBG] pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:02.728 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:02 vm02 bash[17013]: cluster 2026-03-06T22:41:00.760093+0000 mgr.vm02.opvwec (mgr.14199) 122 : cluster [DBG] pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:03.612 INFO:teuthology.orchestra.run.vm02.stdout:94489280524
2026-03-06T23:41:03.788 INFO:tasks.cephadm.ceph_manager.ceph:need seq 94489280523 got 94489280524 for osd.7
2026-03-06T23:41:03.788 DEBUG:teuthology.parallel:result is None
2026-03-06T23:41:03.925 INFO:teuthology.orchestra.run.vm02.stdout:73014444045
2026-03-06T23:41:03.971 INFO:teuthology.orchestra.run.vm02.stdout:94489280524
2026-03-06T23:41:04.039 INFO:tasks.cephadm.ceph_manager.ceph:need seq 73014444044 got 73014444045 for osd.0
2026-03-06T23:41:04.039 DEBUG:teuthology.parallel:result is None
2026-03-06T23:41:04.122 INFO:tasks.cephadm.ceph_manager.ceph:need seq 94489280523 got 94489280524 for osd.4
2026-03-06T23:41:04.122 DEBUG:teuthology.parallel:result is None
2026-03-06T23:41:04.145 INFO:teuthology.orchestra.run.vm02.stdout:81604378637
2026-03-06T23:41:04.153 INFO:teuthology.orchestra.run.vm02.stdout:68719476749
2026-03-06T23:41:04.169 INFO:teuthology.orchestra.run.vm02.stdout:85899345932
2026-03-06T23:41:04.170 INFO:teuthology.orchestra.run.vm02.stdout:81604378637
2026-03-06T23:41:04.220 INFO:teuthology.orchestra.run.vm02.stdout:90194313228
2026-03-06T23:41:04.388 INFO:tasks.cephadm.ceph_manager.ceph:need seq 90194313227 got 90194313228 for osd.6
2026-03-06T23:41:04.388 DEBUG:teuthology.parallel:result is None
2026-03-06T23:41:04.396 INFO:tasks.cephadm.ceph_manager.ceph:need seq 81604378636 got 81604378637 for osd.2
2026-03-06T23:41:04.396 DEBUG:teuthology.parallel:result is None
2026-03-06T23:41:04.396 INFO:tasks.cephadm.ceph_manager.ceph:need seq 85899345931 got 85899345932 for osd.5
2026-03-06T23:41:04.396 DEBUG:teuthology.parallel:result is None
2026-03-06T23:41:04.411 INFO:tasks.cephadm.ceph_manager.ceph:need seq 68719476748 got 68719476749 for osd.1
2026-03-06T23:41:04.411 DEBUG:teuthology.parallel:result is None
2026-03-06T23:41:04.413 INFO:tasks.cephadm.ceph_manager.ceph:need seq 81604378636 got 81604378637 for osd.3
2026-03-06T23:41:04.413 DEBUG:teuthology.parallel:result is None
2026-03-06T23:41:04.413 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean
2026-03-06T23:41:04.413 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph pg dump --format=json
2026-03-06T23:41:04.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:04 vm02 bash[17013]: cluster 2026-03-06T22:41:02.760354+0000 mgr.vm02.opvwec (mgr.14199) 123 : cluster [DBG] pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:04.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:04 vm02 bash[17013]: cluster 2026-03-06T22:41:02.760354+0000 mgr.vm02.opvwec (mgr.14199) 123 : cluster [DBG] pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:04.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:04 vm02 bash[17013]: audit 2026-03-06T22:41:03.610429+0000 mon.vm07 (mon.1) 25 : audit [DBG] from='client.? 192.168.123.102:0/326758799' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch
2026-03-06T23:41:04.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:04 vm02 bash[17013]: audit 2026-03-06T22:41:03.610429+0000 mon.vm07 (mon.1) 25 : audit [DBG] from='client.? 192.168.123.102:0/326758799' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch
2026-03-06T23:41:04.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:04 vm02 bash[17013]: audit 2026-03-06T22:41:03.911975+0000 mon.vm07 (mon.1) 26 : audit [DBG] from='client.? 192.168.123.102:0/4098971467' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch
2026-03-06T23:41:04.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:04 vm02 bash[17013]: audit 2026-03-06T22:41:03.911975+0000 mon.vm07 (mon.1) 26 : audit [DBG] from='client.? 192.168.123.102:0/4098971467' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch
2026-03-06T23:41:04.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:04 vm02 bash[17013]: audit 2026-03-06T22:41:03.966684+0000 mon.vm02 (mon.0) 623 : audit [DBG] from='client.? 192.168.123.102:0/3019377510' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch
2026-03-06T23:41:04.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:04 vm02 bash[17013]: audit 2026-03-06T22:41:03.966684+0000 mon.vm02 (mon.0) 623 : audit [DBG] from='client.? 192.168.123.102:0/3019377510' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch
2026-03-06T23:41:04.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:04 vm02 bash[17013]: audit 2026-03-06T22:41:04.142590+0000 mon.vm07 (mon.1) 27 : audit [DBG] from='client.? 192.168.123.102:0/2117638262' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch
2026-03-06T23:41:04.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:04 vm02 bash[17013]: audit 2026-03-06T22:41:04.142590+0000 mon.vm07 (mon.1) 27 : audit [DBG] from='client.? 192.168.123.102:0/2117638262' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch
2026-03-06T23:41:04.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:04 vm02 bash[17013]: audit 2026-03-06T22:41:04.146595+0000 mon.vm07 (mon.1) 28 : audit [DBG] from='client.? 192.168.123.102:0/3202924947' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch
2026-03-06T23:41:04.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:04 vm02 bash[17013]: audit 2026-03-06T22:41:04.146595+0000 mon.vm07 (mon.1) 28 : audit [DBG] from='client.? 192.168.123.102:0/3202924947' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch
2026-03-06T23:41:04.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:04 vm02 bash[17013]: audit 2026-03-06T22:41:04.157456+0000 mon.vm02 (mon.0) 624 : audit [DBG] from='client.? 192.168.123.102:0/3986648851' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch
2026-03-06T23:41:04.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:04 vm02 bash[17013]: audit 2026-03-06T22:41:04.157456+0000 mon.vm02 (mon.0) 624 : audit [DBG] from='client.? 192.168.123.102:0/3986648851' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch
2026-03-06T23:41:04.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:04 vm02 bash[17013]: audit 2026-03-06T22:41:04.164601+0000 mon.vm02 (mon.0) 625 : audit [DBG] from='client.? 192.168.123.102:0/3862479504' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch
2026-03-06T23:41:04.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:04 vm02 bash[17013]: audit 2026-03-06T22:41:04.164601+0000 mon.vm02 (mon.0) 625 : audit [DBG] from='client.? 192.168.123.102:0/3862479504' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch
2026-03-06T23:41:04.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:04 vm02 bash[17013]: audit 2026-03-06T22:41:04.217033+0000 mon.vm07 (mon.1) 29 : audit [DBG] from='client.? 192.168.123.102:0/2634072255' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch
2026-03-06T23:41:04.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:04 vm02 bash[17013]: audit 2026-03-06T22:41:04.217033+0000 mon.vm07 (mon.1) 29 : audit [DBG] from='client.? 192.168.123.102:0/2634072255' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch
2026-03-06T23:41:04.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:04 vm07 bash[20848]: cluster 2026-03-06T22:41:02.760354+0000 mgr.vm02.opvwec (mgr.14199) 123 : cluster [DBG] pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:04.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:04 vm07 bash[20848]: cluster 2026-03-06T22:41:02.760354+0000 mgr.vm02.opvwec (mgr.14199) 123 : cluster [DBG] pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:04.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:04 vm07 bash[20848]: audit 2026-03-06T22:41:03.610429+0000 mon.vm07 (mon.1) 25 : audit [DBG] from='client.? 192.168.123.102:0/326758799' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch
2026-03-06T23:41:04.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:04 vm07 bash[20848]: audit 2026-03-06T22:41:03.610429+0000 mon.vm07 (mon.1) 25 : audit [DBG] from='client.? 192.168.123.102:0/326758799' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch
2026-03-06T23:41:04.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:04 vm07 bash[20848]: audit 2026-03-06T22:41:03.911975+0000 mon.vm07 (mon.1) 26 : audit [DBG] from='client.? 192.168.123.102:0/4098971467' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch
2026-03-06T23:41:04.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:04 vm07 bash[20848]: audit 2026-03-06T22:41:03.911975+0000 mon.vm07 (mon.1) 26 : audit [DBG] from='client.? 192.168.123.102:0/4098971467' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch
2026-03-06T23:41:04.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:04 vm07 bash[20848]: audit 2026-03-06T22:41:03.966684+0000 mon.vm02 (mon.0) 623 : audit [DBG] from='client.? 192.168.123.102:0/3019377510' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch
2026-03-06T23:41:04.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:04 vm07 bash[20848]: audit 2026-03-06T22:41:03.966684+0000 mon.vm02 (mon.0) 623 : audit [DBG] from='client.? 192.168.123.102:0/3019377510' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch
2026-03-06T23:41:04.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:04 vm07 bash[20848]: audit 2026-03-06T22:41:04.142590+0000 mon.vm07 (mon.1) 27 : audit [DBG] from='client.? 192.168.123.102:0/2117638262' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch
2026-03-06T23:41:04.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:04 vm07 bash[20848]: audit 2026-03-06T22:41:04.142590+0000 mon.vm07 (mon.1) 27 : audit [DBG] from='client.? 192.168.123.102:0/2117638262' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch
2026-03-06T23:41:04.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:04 vm07 bash[20848]: audit 2026-03-06T22:41:04.146595+0000 mon.vm07 (mon.1) 28 : audit [DBG] from='client.? 192.168.123.102:0/3202924947' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch
2026-03-06T23:41:04.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:04 vm07 bash[20848]: audit 2026-03-06T22:41:04.146595+0000 mon.vm07 (mon.1) 28 : audit [DBG] from='client.? 192.168.123.102:0/3202924947' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch
2026-03-06T23:41:04.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:04 vm07 bash[20848]: audit 2026-03-06T22:41:04.157456+0000 mon.vm02 (mon.0) 624 : audit [DBG] from='client.? 192.168.123.102:0/3986648851' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch
2026-03-06T23:41:04.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:04 vm07 bash[20848]: audit 2026-03-06T22:41:04.157456+0000 mon.vm02 (mon.0) 624 : audit [DBG] from='client.? 192.168.123.102:0/3986648851' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch
2026-03-06T23:41:04.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:04 vm07 bash[20848]: audit 2026-03-06T22:41:04.164601+0000 mon.vm02 (mon.0) 625 : audit [DBG] from='client.? 192.168.123.102:0/3862479504' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch
2026-03-06T23:41:04.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:04 vm07 bash[20848]: audit 2026-03-06T22:41:04.164601+0000 mon.vm02 (mon.0) 625 : audit [DBG] from='client.? 192.168.123.102:0/3862479504' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch
2026-03-06T23:41:04.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:04 vm07 bash[20848]: audit 2026-03-06T22:41:04.217033+0000 mon.vm07 (mon.1) 29 : audit [DBG] from='client.? 192.168.123.102:0/2634072255' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch
2026-03-06T23:41:04.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:04 vm07 bash[20848]: audit 2026-03-06T22:41:04.217033+0000 mon.vm07 (mon.1) 29 : audit [DBG] from='client.? 192.168.123.102:0/2634072255' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch
2026-03-06T23:41:06.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:06 vm07 bash[20848]: cluster 2026-03-06T22:41:04.760668+0000 mgr.vm02.opvwec (mgr.14199) 124 : cluster [DBG] pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:06.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:06 vm07 bash[20848]: cluster 2026-03-06T22:41:04.760668+0000 mgr.vm02.opvwec (mgr.14199) 124 : cluster [DBG] pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:06.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:06 vm07 bash[20848]: audit 2026-03-06T22:41:05.800442+0000 mon.vm02 (mon.0) 626 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:41:06.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:06 vm07 bash[20848]: audit 2026-03-06T22:41:05.800442+0000 mon.vm02 (mon.0) 626 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:41:06.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:06 vm02 bash[17013]: cluster 2026-03-06T22:41:04.760668+0000 mgr.vm02.opvwec (mgr.14199) 124 : cluster [DBG] pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:06.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:06 vm02 bash[17013]: cluster 2026-03-06T22:41:04.760668+0000 mgr.vm02.opvwec (mgr.14199) 124 : cluster [DBG] pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:06.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:06 vm02 bash[17013]: audit 2026-03-06T22:41:05.800442+0000 mon.vm02 (mon.0) 626 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:41:06.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:06 vm02 bash[17013]: audit 2026-03-06T22:41:05.800442+0000 mon.vm02 (mon.0) 626 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:41:08.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:08 vm07 bash[20848]: cluster 2026-03-06T22:41:06.760918+0000 mgr.vm02.opvwec (mgr.14199) 125 : cluster [DBG] pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:08.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:08 vm07 bash[20848]: cluster 2026-03-06T22:41:06.760918+0000 mgr.vm02.opvwec (mgr.14199) 125 : cluster [DBG] pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:08.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:08 vm02 bash[17013]: cluster 2026-03-06T22:41:06.760918+0000 mgr.vm02.opvwec (mgr.14199) 125 : cluster [DBG] pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:08.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:08 vm02 bash[17013]: cluster 2026-03-06T22:41:06.760918+0000 mgr.vm02.opvwec (mgr.14199) 125 : cluster [DBG] pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:09.250 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config 2026-03-06T23:41:09.596 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-06T23:41:09.596 INFO:teuthology.orchestra.run.vm02.stderr:dumped all 2026-03-06T23:41:09.655 INFO:teuthology.orchestra.run.vm02.stdout:{"pg_ready":true,"pg_map":{"version":81,"stamp":"2026-03-06T22:41:08.761063+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":3,"kb":167739392,"kb_used":1037440,"kb_used_data":3116,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":166701952,"statfs":{"total":171765137408,"available":170702798848,"internally_reserved":0,"allocated":3190784,"data_stored":2031848,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12712,"internal_metadata":219663960},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},
"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.001674"},"pg_stats":[{"pgid":"1.0","version":"23'32","reported_seq":58,"reported_epoch":24,"state":"active+clean","last_fresh":"2026-03-06T22:40:11.838714+0000","last_change":"2026-03-06T22:40:11.044457+0000","last_active":"2026-03-06T22:40:11.838714+0000","last_peered":"2026-03-06T22:40:11.838714+0000","last_clean":"2026-03-06T22:40:11.838714+0000","last_became_active":"2026-03-06T22:40:11.044325+0000","last_became_peered":"2026-03-06T22:40:11.044325+0000","last_unstale":"2026-03-06T22:40:11.838714+0000","last_undegraded":"2026-03-06T22:40:11.838714+0000","last_fullsized":"2026-03-06T22:40:11.838714+0000","mapping_epoch":22,"log_start":"0'0","ondisk_log_start":"0'0","created":20,"last_epoch_clean":23,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-06T22:40:07.410293+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-06T22:40:07.410293+0000","last_clean_scrub_stamp":"2026-03-06T22:40:07.410293+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-08T07:40:11.014252+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,7,2],"acting":[6,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocat
ed":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":7,"up_from":22,"seq":94489280525,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":437244,"kb_used_data":672,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20530180,"statfs":{"total":21470642176,"available":21022904320,"internally_reserved":0,"allocated":688128,"data_stored":541031,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":22,"seq":94489280525,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":436664,"kb_used_data":220,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20530760,"statfs":{"total":21470642176,"available":21023498240,"internally_reserved":0,"allocated":225280,"data_stored":81751,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":21,"seq":90194313229,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27644,"kb_used_data":672,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939780,"statfs":{"total":21470642176,"available":21442334720,"internally_reserved":0,"allocated":688128,"data_stored":541031,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":20,"seq":85899345934,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27060,"kb_used_data":220,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940364,"statfs":{"total":21470642176,"available":21442932736,"internally_reserved":0,"allocated":225280,"data_stored":81751,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":19,"seq":81604378638,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27064,"kb_used_data":220,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940360,"statfs":{"total":21470642176,"available":21442928640,"internally_reserved":0,"allocated":225280,"data_stored":81751,"data_compressed":0,"data_compressed_alloc
ated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":19,"seq":81604378638,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27640,"kb_used_data":672,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939784,"statfs":{"total":21470642176,"available":21442338816,"internally_reserved":0,"allocated":688128,"data_stored":541031,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":17,"seq":73014444046,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27060,"kb_used_data":220,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940364,"statfs":{"total":21470642176,"available":21442932736,"internally_reserved":0,"allocated":225280,"data_stored":81751,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":16,"seq":68719476751,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27064,"kb_used_data":220,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940360,"statfs":{"total":21470642176,"available":21442928640,"internally_reserved":0,"allocated":225280,"data_stored":81751,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-06T23:41:09.655 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph pg dump --format=json 2026-03-06T23:41:10.645 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:10 vm02 bash[17013]: cluster 2026-03-06T22:41:08.761178+0000 mgr.vm02.opvwec (mgr.14199) 126 : cluster [DBG] pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:10.645 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:10 vm02 bash[17013]: cluster 2026-03-06T22:41:08.761178+0000 mgr.vm02.opvwec (mgr.14199) 126 : cluster [DBG] pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:10.645 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:10 vm02 bash[17013]: audit 2026-03-06T22:41:09.590763+0000 mgr.vm02.opvwec (mgr.14199) 127 : audit [DBG] from='client.14434 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:41:10.645 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:10 vm02 bash[17013]: audit 2026-03-06T22:41:09.590763+0000 mgr.vm02.opvwec (mgr.14199) 127 : audit [DBG] from='client.14434 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:41:10.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:10 vm07 bash[20848]: cluster 2026-03-06T22:41:08.761178+0000 mgr.vm02.opvwec (mgr.14199) 126 : cluster [DBG] pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:10.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:10 vm07 bash[20848]: cluster 2026-03-06T22:41:08.761178+0000 mgr.vm02.opvwec (mgr.14199) 126 : cluster [DBG] pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:10.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:10 vm07 bash[20848]: audit 2026-03-06T22:41:09.590763+0000 mgr.vm02.opvwec (mgr.14199) 127 : audit [DBG] from='client.14434 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:41:10.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:10 vm07 bash[20848]: audit 2026-03-06T22:41:09.590763+0000 mgr.vm02.opvwec (mgr.14199) 127 : audit [DBG] from='client.14434 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:41:12.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:12 vm07 bash[20848]: cluster 2026-03-06T22:41:10.761418+0000 mgr.vm02.opvwec (mgr.14199) 128 : cluster [DBG] pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:12.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:12 vm07 bash[20848]: cluster 2026-03-06T22:41:10.761418+0000 mgr.vm02.opvwec (mgr.14199) 128 : cluster [DBG] pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:12.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:12 vm02 bash[17013]: cluster 2026-03-06T22:41:10.761418+0000 mgr.vm02.opvwec (mgr.14199) 128 : cluster [DBG] pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:12.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:12 vm02 bash[17013]: cluster 2026-03-06T22:41:10.761418+0000 mgr.vm02.opvwec (mgr.14199) 128 : cluster [DBG] pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:14.469 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config 
/var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config 2026-03-06T23:41:14.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:14 vm02 bash[17013]: cluster 2026-03-06T22:41:12.761736+0000 mgr.vm02.opvwec (mgr.14199) 129 : cluster [DBG] pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:14.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:14 vm02 bash[17013]: cluster 2026-03-06T22:41:12.761736+0000 mgr.vm02.opvwec (mgr.14199) 129 : cluster [DBG] pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:14.827 INFO:teuthology.orchestra.run.vm02.stderr:dumped all 2026-03-06T23:41:14.827 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-06T23:41:14.886 INFO:teuthology.orchestra.run.vm02.stdout:{"pg_ready":true,"pg_map":{"version":84,"stamp":"2026-03-06T22:41:14.761896+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":3,"kb":167739392,"kb_used":1037440,"kb_used_data":3116,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":166701952,"statfs":{"total":171765137408,"available":170702798848,"internally_reserved":0,"allocated":3190784,"data_stored":2031848,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12712,"internal_metadata":219663960},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num
_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.001668"},"pg_stats":[{"pgid":"1.0","version":"23'32","reported_seq":58,"reported_epoch":24,"state":"active+clean","last_fresh":"2026-03-06T22:40:11.838714+0000","last_change":"2026-03-06T22:40:11.044457+0000","last_active":"2026-03-06T22:40:11.838714+0000","last_peered":"2026-03-06T22:40:11.838714+0000","last_clean":"2026-03-06T22:40:11.838714+0000","last_became_active":"2026-03-06T22:40:11.044325+0000","last_became_peered":"2026-03-06T22:40:11.044325+0000","last_unstale":"2026-03-06T22:40:11.838714+0000","last_undegraded":"2026-03-06T22:40:11.838714+0000","last_fullsized":"2026-03-06T22:40:11.838714+0000","mapping_epoch":22,"log_start":"0'0","ondisk_log_start":"0'0","created":20,"last_epoch_clean":23,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-06T22:40:07.410293+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-06T22:40:07.410293+0000","last_clean_scrub_stamp":"2026-03-06T22:40:07.410293+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-08T07:40:11.014252+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,7,2],"acting":[6,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":7,"up_from":22,"seq":94489280526,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":437244,"kb_used_data":672,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20530180,"statfs":{"total":21470642176,"available":21022904320,"internally_reserved":0,"allocated":688128,"data_stored":541031,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":22,"seq":94489280526,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":436664,"kb_used_data":220,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20530760,"statfs":{"total":21470642176,"available":21023498240,"internally_reserved":0,"allocated":225280,"data_stored":81751,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocate
d":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":21,"seq":90194313231,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27644,"kb_used_data":672,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939780,"statfs":{"total":21470642176,"available":21442334720,"internally_reserved":0,"allocated":688128,"data_stored":541031,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":20,"seq":85899345935,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27060,"kb_used_data":220,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940364,"statfs":{"total":21470642176,"available":21442932736,"internally_reserved":0,"allocated":225280,"data_stored":81751,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":19,"seq":81604378639,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27064,"kb_used_data":220,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940360,"statfs":{"total":21470642176,"available":21442928640,"internally_reserved":0,"allocated":225280,"data_stored":81751,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":19,"seq":81604378639,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27640,"kb_used_data":672,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939784,"statfs":{"total":21470642176,"available":21442338816,"internally_reserved":0,"allocated":688128,"data_stored":541031,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":17,"seq":73014444047,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27060,"kb_used_data":220,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940364,"statfs":{"total":21470642176,"available":21442932736,"internally_reserved":0,"allocated":225280,"data_
stored":81751,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":16,"seq":68719476752,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27064,"kb_used_data":220,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940360,"statfs":{"total":21470642176,"available":21442928640,"internally_reserved":0,"allocated":225280,"data_stored":81751,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-06T23:41:14.886 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-06T23:41:14.886 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 
2026-03-06T23:41:14.886 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-06T23:41:14.886 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph health --format=json 2026-03-06T23:41:14.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:14 vm07 bash[20848]: cluster 2026-03-06T22:41:12.761736+0000 mgr.vm02.opvwec (mgr.14199) 129 : cluster [DBG] pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:14.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:14 vm07 bash[20848]: cluster 2026-03-06T22:41:12.761736+0000 mgr.vm02.opvwec (mgr.14199) 129 : cluster [DBG] pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:16.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:16 vm02 bash[17013]: cluster 2026-03-06T22:41:14.762014+0000 mgr.vm02.opvwec (mgr.14199) 130 : cluster [DBG] pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:16.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:16 vm02 bash[17013]: cluster 2026-03-06T22:41:14.762014+0000 mgr.vm02.opvwec (mgr.14199) 130 : cluster [DBG] pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:16.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:16 vm02 bash[17013]: audit 2026-03-06T22:41:14.821801+0000 mgr.vm02.opvwec (mgr.14199) 131 : audit [DBG] from='client.14438 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:41:16.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:16 vm02 bash[17013]: audit 2026-03-06T22:41:14.821801+0000 mgr.vm02.opvwec (mgr.14199) 131 : audit [DBG] from='client.14438 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:41:16.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:16 vm07 bash[20848]: cluster 2026-03-06T22:41:14.762014+0000 mgr.vm02.opvwec (mgr.14199) 130 : cluster [DBG] pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:16.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:16 vm07 bash[20848]: cluster 2026-03-06T22:41:14.762014+0000 mgr.vm02.opvwec (mgr.14199) 130 : cluster [DBG] pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:16.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:16 vm07 bash[20848]: audit 2026-03-06T22:41:14.821801+0000 mgr.vm02.opvwec (mgr.14199) 131 : audit [DBG] from='client.14438 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:41:16.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:16 vm07 bash[20848]: audit 2026-03-06T22:41:14.821801+0000 mgr.vm02.opvwec (mgr.14199) 131 : audit [DBG] from='client.14438 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:41:18.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:18 vm02 bash[17013]: cluster 2026-03-06T22:41:16.762230+0000 mgr.vm02.opvwec (mgr.14199) 132 : cluster [DBG] pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:18.742 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:18 vm02 bash[17013]: cluster 2026-03-06T22:41:16.762230+0000 mgr.vm02.opvwec (mgr.14199) 132 : cluster [DBG] pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:18.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:18 vm07 bash[20848]: cluster 2026-03-06T22:41:16.762230+0000 mgr.vm02.opvwec (mgr.14199) 132 : cluster [DBG] pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:18.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:18 vm07 bash[20848]: cluster 2026-03-06T22:41:16.762230+0000 mgr.vm02.opvwec (mgr.14199) 132 : cluster [DBG] pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:19.683 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config 2026-03-06T23:41:20.046 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-06T23:41:20.046 INFO:teuthology.orchestra.run.vm02.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-06T23:41:20.102 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-06T23:41:20.103 INFO:tasks.cephadm:Setup complete, yielding 2026-03-06T23:41:20.103 INFO:teuthology.run_tasks:Running task cephadm.shell... 2026-03-06T23:41:20.104 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm02.local 2026-03-06T23:41:20.105 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- bash -c 'ceph orch status' 2026-03-06T23:41:20.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:20 vm07 bash[20848]: cluster 2026-03-06T22:41:18.762440+0000 mgr.vm02.opvwec (mgr.14199) 133 : cluster [DBG] pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:20.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:20 vm07 bash[20848]: cluster 2026-03-06T22:41:18.762440+0000 mgr.vm02.opvwec (mgr.14199) 133 : cluster [DBG] pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:20.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:20 vm07 bash[20848]: audit 2026-03-06T22:41:20.041598+0000 mon.vm02 (mon.0) 627 : audit [DBG] from='client.? 192.168.123.102:0/2665104863' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-06T23:41:20.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:20 vm07 bash[20848]: audit 2026-03-06T22:41:20.041598+0000 mon.vm02 (mon.0) 627 : audit [DBG] from='client.? 
192.168.123.102:0/2665104863' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-06T23:41:20.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:20 vm02 bash[17013]: cluster 2026-03-06T22:41:18.762440+0000 mgr.vm02.opvwec (mgr.14199) 133 : cluster [DBG] pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:20.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:20 vm02 bash[17013]: cluster 2026-03-06T22:41:18.762440+0000 mgr.vm02.opvwec (mgr.14199) 133 : cluster [DBG] pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:20.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:20 vm02 bash[17013]: audit 2026-03-06T22:41:20.041598+0000 mon.vm02 (mon.0) 627 : audit [DBG] from='client.? 192.168.123.102:0/2665104863' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-06T23:41:20.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:20 vm02 bash[17013]: audit 2026-03-06T22:41:20.041598+0000 mon.vm02 (mon.0) 627 : audit [DBG] from='client.? 192.168.123.102:0/2665104863' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-06T23:41:21.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:21 vm07 bash[20848]: audit 2026-03-06T22:41:20.800639+0000 mon.vm02 (mon.0) 628 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:41:21.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:21 vm07 bash[20848]: audit 2026-03-06T22:41:20.800639+0000 mon.vm02 (mon.0) 628 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:41:21.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:21 vm02 bash[17013]: audit 2026-03-06T22:41:20.800639+0000 mon.vm02 (mon.0) 628 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:41:21.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:21 vm02 bash[17013]: audit 2026-03-06T22:41:20.800639+0000 mon.vm02 (mon.0) 628 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:41:22.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:22 vm07 bash[20848]: cluster 2026-03-06T22:41:20.762678+0000 mgr.vm02.opvwec (mgr.14199) 134 : cluster [DBG] pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:22.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:22 vm07 bash[20848]: cluster 2026-03-06T22:41:20.762678+0000 mgr.vm02.opvwec (mgr.14199) 134 : cluster [DBG] pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:22.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:22 vm07 bash[20848]: audit 2026-03-06T22:41:22.095356+0000 mon.vm02 (mon.0) 629 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-06T23:41:22.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:22 vm07 bash[20848]: audit 2026-03-06T22:41:22.095356+0000 mon.vm02 (mon.0) 629 : audit [DBG] from='mgr.14199 
192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-06T23:41:22.992 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:22 vm02 bash[17013]: cluster 2026-03-06T22:41:20.762678+0000 mgr.vm02.opvwec (mgr.14199) 134 : cluster [DBG] pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:22.992 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:22 vm02 bash[17013]: cluster 2026-03-06T22:41:20.762678+0000 mgr.vm02.opvwec (mgr.14199) 134 : cluster [DBG] pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:22.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:22 vm02 bash[17013]: audit 2026-03-06T22:41:22.095356+0000 mon.vm02 (mon.0) 629 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-06T23:41:22.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:22 vm02 bash[17013]: audit 2026-03-06T22:41:22.095356+0000 mon.vm02 (mon.0) 629 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-06T23:41:24.742 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:24 vm02 bash[17013]: cluster 2026-03-06T22:41:22.762924+0000 mgr.vm02.opvwec (mgr.14199) 135 : cluster [DBG] pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:24.743 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:24 vm02 bash[17013]: cluster 2026-03-06T22:41:22.762924+0000 mgr.vm02.opvwec (mgr.14199) 135 : cluster [DBG] pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:24.881 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config 2026-03-06T23:41:24.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:24 vm07 bash[20848]: cluster 2026-03-06T22:41:22.762924+0000 mgr.vm02.opvwec (mgr.14199) 135 : cluster [DBG] pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:24.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:24 vm07 bash[20848]: cluster 2026-03-06T22:41:22.762924+0000 mgr.vm02.opvwec (mgr.14199) 135 : cluster [DBG] pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:25.220 INFO:teuthology.orchestra.run.vm02.stdout:Backend: cephadm 2026-03-06T23:41:25.220 INFO:teuthology.orchestra.run.vm02.stdout:Available: Yes 2026-03-06T23:41:25.220 INFO:teuthology.orchestra.run.vm02.stdout:Paused: No 2026-03-06T23:41:25.284 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- bash -c 'ceph orch ps' 2026-03-06T23:41:26.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:26 vm07 bash[20848]: cluster 2026-03-06T22:41:24.763184+0000 mgr.vm02.opvwec (mgr.14199) 136 : cluster [DBG] pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:26.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:26 vm07 bash[20848]: cluster 2026-03-06T22:41:24.763184+0000 mgr.vm02.opvwec (mgr.14199) 136 : cluster [DBG] pgmap 
v89: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:26.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:26 vm07 bash[20848]: audit 2026-03-06T22:41:25.215395+0000 mgr.vm02.opvwec (mgr.14199) 137 : audit [DBG] from='client.14446 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-06T23:41:26.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:26 vm07 bash[20848]: audit 2026-03-06T22:41:25.215395+0000 mgr.vm02.opvwec (mgr.14199) 137 : audit [DBG] from='client.14446 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-06T23:41:26.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:26 vm02 bash[17013]: cluster 2026-03-06T22:41:24.763184+0000 mgr.vm02.opvwec (mgr.14199) 136 : cluster [DBG] pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:26.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:26 vm02 bash[17013]: cluster 2026-03-06T22:41:24.763184+0000 mgr.vm02.opvwec (mgr.14199) 136 : cluster [DBG] pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:26.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:26 vm02 bash[17013]: audit 2026-03-06T22:41:25.215395+0000 mgr.vm02.opvwec (mgr.14199) 137 : audit [DBG] from='client.14446 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-06T23:41:26.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:26 vm02 bash[17013]: audit 2026-03-06T22:41:25.215395+0000 mgr.vm02.opvwec (mgr.14199) 137 : audit [DBG] from='client.14446 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-06T23:41:28.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:28 vm07 bash[20848]: cluster 2026-03-06T22:41:26.763407+0000 mgr.vm02.opvwec (mgr.14199) 138 : cluster [DBG] pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:28.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:28 vm07 bash[20848]: cluster 2026-03-06T22:41:26.763407+0000 mgr.vm02.opvwec (mgr.14199) 138 : cluster [DBG] pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:28.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:28 vm07 bash[20848]: audit 2026-03-06T22:41:27.127504+0000 mon.vm02 (mon.0) 630 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:28.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:28 vm07 bash[20848]: audit 2026-03-06T22:41:27.127504+0000 mon.vm02 (mon.0) 630 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:28.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:28 vm07 bash[20848]: audit 2026-03-06T22:41:27.131860+0000 mon.vm02 (mon.0) 631 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:28.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:28 vm07 bash[20848]: audit 2026-03-06T22:41:27.131860+0000 mon.vm02 (mon.0) 631 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:28.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:28 vm07 bash[20848]: audit 2026-03-06T22:41:27.486174+0000 mon.vm02 (mon.0) 632 : audit [INF] from='mgr.14199 
192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:28.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:28 vm07 bash[20848]: audit 2026-03-06T22:41:27.486174+0000 mon.vm02 (mon.0) 632 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:28.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:28 vm07 bash[20848]: audit 2026-03-06T22:41:27.491038+0000 mon.vm02 (mon.0) 633 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:28.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:28 vm07 bash[20848]: audit 2026-03-06T22:41:27.491038+0000 mon.vm02 (mon.0) 633 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:28.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:28 vm07 bash[20848]: audit 2026-03-06T22:41:27.793429+0000 mon.vm02 (mon.0) 634 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:41:28.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:28 vm07 bash[20848]: audit 2026-03-06T22:41:27.793429+0000 mon.vm02 (mon.0) 634 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:41:28.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:28 vm07 bash[20848]: audit 2026-03-06T22:41:27.793896+0000 mon.vm02 (mon.0) 635 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-06T23:41:28.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:28 vm07 bash[20848]: audit 2026-03-06T22:41:27.793896+0000 mon.vm02 (mon.0) 635 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-06T23:41:28.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:28 vm07 bash[20848]: audit 2026-03-06T22:41:27.798076+0000 mon.vm02 (mon.0) 636 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:28.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:28 vm07 bash[20848]: audit 2026-03-06T22:41:27.798076+0000 mon.vm02 (mon.0) 636 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:28.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:28 vm07 bash[20848]: audit 2026-03-06T22:41:27.799402+0000 mon.vm02 (mon.0) 637 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-06T23:41:28.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:28 vm07 bash[20848]: audit 2026-03-06T22:41:27.799402+0000 mon.vm02 (mon.0) 637 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-06T23:41:28.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:28 vm02 bash[17013]: cluster 2026-03-06T22:41:26.763407+0000 mgr.vm02.opvwec (mgr.14199) 138 : cluster [DBG] pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:28.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:28 vm02 bash[17013]: cluster 
2026-03-06T22:41:26.763407+0000 mgr.vm02.opvwec (mgr.14199) 138 : cluster [DBG] pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:28.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:28 vm02 bash[17013]: audit 2026-03-06T22:41:27.127504+0000 mon.vm02 (mon.0) 630 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:28.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:28 vm02 bash[17013]: audit 2026-03-06T22:41:27.127504+0000 mon.vm02 (mon.0) 630 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:28.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:28 vm02 bash[17013]: audit 2026-03-06T22:41:27.131860+0000 mon.vm02 (mon.0) 631 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:28.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:28 vm02 bash[17013]: audit 2026-03-06T22:41:27.131860+0000 mon.vm02 (mon.0) 631 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:28.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:28 vm02 bash[17013]: audit 2026-03-06T22:41:27.486174+0000 mon.vm02 (mon.0) 632 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:28.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:28 vm02 bash[17013]: audit 2026-03-06T22:41:27.486174+0000 mon.vm02 (mon.0) 632 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:28.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:28 vm02 bash[17013]: audit 2026-03-06T22:41:27.491038+0000 mon.vm02 (mon.0) 633 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:28.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:28 vm02 bash[17013]: audit 2026-03-06T22:41:27.491038+0000 mon.vm02 (mon.0) 633 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:28.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:28 vm02 bash[17013]: audit 2026-03-06T22:41:27.793429+0000 mon.vm02 (mon.0) 634 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:41:28.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:28 vm02 bash[17013]: audit 2026-03-06T22:41:27.793429+0000 mon.vm02 (mon.0) 634 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:41:28.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:28 vm02 bash[17013]: audit 2026-03-06T22:41:27.793896+0000 mon.vm02 (mon.0) 635 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-06T23:41:28.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:28 vm02 bash[17013]: audit 2026-03-06T22:41:27.793896+0000 mon.vm02 (mon.0) 635 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-06T23:41:28.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:28 vm02 bash[17013]: audit 2026-03-06T22:41:27.798076+0000 mon.vm02 (mon.0) 636 : audit [INF] from='mgr.14199 
192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:28.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:28 vm02 bash[17013]: audit 2026-03-06T22:41:27.798076+0000 mon.vm02 (mon.0) 636 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:28.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:28 vm02 bash[17013]: audit 2026-03-06T22:41:27.799402+0000 mon.vm02 (mon.0) 637 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-06T23:41:28.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:28 vm02 bash[17013]: audit 2026-03-06T22:41:27.799402+0000 mon.vm02 (mon.0) 637 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-06T23:41:29.932 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config 2026-03-06T23:41:30.242 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:30 vm02 bash[17013]: cluster 2026-03-06T22:41:28.763638+0000 mgr.vm02.opvwec (mgr.14199) 139 : cluster [DBG] pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:30.242 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:30 vm02 bash[17013]: cluster 2026-03-06T22:41:28.763638+0000 mgr.vm02.opvwec (mgr.14199) 139 : cluster [DBG] pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:30.291 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-06T23:41:30.291 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.vm02 vm02 *:9093,9094 running (2m) 2s ago 3m 14.8M - 0.25.0 c8568f914cd2 84648da077e5 2026-03-06T23:41:30.291 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm02 vm02 running (3m) 2s ago 3m 8939k - 19.2.3-39-g340d3c24fc6 8bccc98d839a f31669dc6a31 2026-03-06T23:41:30.291 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm07 vm07 running (2m) 3s ago 2m 6104k - 19.2.3-39-g340d3c24fc6 8bccc98d839a 15fc838c5569 2026-03-06T23:41:30.291 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm02 vm02 running (3m) 2s ago 3m 10.7M - 19.2.3-39-g340d3c24fc6 8bccc98d839a 5a8fefd8c4c1 2026-03-06T23:41:30.291 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm07 vm07 running (2m) 3s ago 2m 10.7M - 19.2.3-39-g340d3c24fc6 8bccc98d839a 1a8b1b8fe066 2026-03-06T23:41:30.291 INFO:teuthology.orchestra.run.vm02.stdout:grafana.vm02 vm02 *:3000 running (2m) 2s ago 2m 67.5M - 10.4.0 c8b91775d855 b526f2d6f4e9 2026-03-06T23:41:30.291 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm02.opvwec vm02 *:9283,8765,8443 running (4m) 2s ago 4m 526M - 19.2.3-39-g340d3c24fc6 8bccc98d839a b47eb74d1963 2026-03-06T23:41:30.291 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm07.jbleen vm07 *:8443,9283,8765 running (2m) 3s ago 2m 471M - 19.2.3-39-g340d3c24fc6 8bccc98d839a de40c7b40128 2026-03-06T23:41:30.291 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm02 vm02 running (4m) 2s ago 4m 48.3M 2048M 19.2.3-39-g340d3c24fc6 8bccc98d839a c6a67e710759 2026-03-06T23:41:30.291 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm07 vm07 running (2m) 3s ago 2m 42.1M 2048M 19.2.3-39-g340d3c24fc6 8bccc98d839a d04715d4bcf4 2026-03-06T23:41:30.291 
INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm02 vm02 *:9100 running (3m) 2s ago 3m 7648k - 1.7.0 72c9c2088986 8881e16001aa 2026-03-06T23:41:30.291 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm07 vm07 *:9100 running (2m) 3s ago 2m 7572k - 1.7.0 72c9c2088986 ebec8a2531f9 2026-03-06T23:41:30.291 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm07 running (91s) 3s ago 94s 55.8M 4096M 19.2.3-39-g340d3c24fc6 8bccc98d839a fdee2703d717 2026-03-06T23:41:30.291 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (91s) 2s ago 94s 56.6M 4096M 19.2.3-39-g340d3c24fc6 8bccc98d839a a3d88ef28af4 2026-03-06T23:41:30.291 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm07 running (89s) 3s ago 92s 59.0M 4096M 19.2.3-39-g340d3c24fc6 8bccc98d839a 38338d41322a 2026-03-06T23:41:30.291 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm02 running (88s) 2s ago 93s 36.0M 4096M 19.2.3-39-g340d3c24fc6 8bccc98d839a cf70c9c0e386 2026-03-06T23:41:30.291 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm02 running (87s) 2s ago 91s 57.8M 4096M 19.2.3-39-g340d3c24fc6 8bccc98d839a bc8c49c1ec9a 2026-03-06T23:41:30.291 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm07 running (87s) 3s ago 90s 34.0M 4096M 19.2.3-39-g340d3c24fc6 8bccc98d839a de138beeb700 2026-03-06T23:41:30.291 INFO:teuthology.orchestra.run.vm02.stdout:osd.6 vm07 running (86s) 3s ago 88s 37.0M 4096M 19.2.3-39-g340d3c24fc6 8bccc98d839a 4cc4e2c2032e 2026-03-06T23:41:30.291 INFO:teuthology.orchestra.run.vm02.stdout:osd.7 vm02 running (85s) 2s ago 88s 58.5M 4096M 19.2.3-39-g340d3c24fc6 8bccc98d839a 00712ed7c41f 2026-03-06T23:41:30.291 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.vm02 vm02 *:9095 running (2m) 2s ago 2m 35.1M - 2.51.0 1d3b7f56885b 8e10e7c97737 2026-03-06T23:41:30.354 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- bash -c 'ceph orch ls' 2026-03-06T23:41:30.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:30 vm07 bash[20848]: cluster 2026-03-06T22:41:28.763638+0000 mgr.vm02.opvwec (mgr.14199) 139 : cluster [DBG] pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:30.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:30 vm07 bash[20848]: cluster 2026-03-06T22:41:28.763638+0000 mgr.vm02.opvwec (mgr.14199) 139 : cluster [DBG] pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:31.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:31 vm02 bash[17013]: audit 2026-03-06T22:41:30.281817+0000 mgr.vm02.opvwec (mgr.14199) 140 : audit [DBG] from='client.14450 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-06T23:41:31.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:31 vm02 bash[17013]: audit 2026-03-06T22:41:30.281817+0000 mgr.vm02.opvwec (mgr.14199) 140 : audit [DBG] from='client.14450 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-06T23:41:31.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:31 vm07 bash[20848]: audit 2026-03-06T22:41:30.281817+0000 mgr.vm02.opvwec (mgr.14199) 140 : audit [DBG] from='client.14450 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-06T23:41:31.478 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:31 vm07 bash[20848]: audit 2026-03-06T22:41:30.281817+0000 mgr.vm02.opvwec (mgr.14199) 140 : audit [DBG] from='client.14450 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:41:32.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:32 vm07 bash[20848]: cluster 2026-03-06T22:41:30.763875+0000 mgr.vm02.opvwec (mgr.14199) 141 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:32.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:32 vm02 bash[17013]: cluster 2026-03-06T22:41:30.763875+0000 mgr.vm02.opvwec (mgr.14199) 141 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:34.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:34 vm07 bash[20848]: cluster 2026-03-06T22:41:32.764151+0000 mgr.vm02.opvwec (mgr.14199) 142 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:34.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:34 vm02 bash[17013]: cluster 2026-03-06T22:41:32.764151+0000 mgr.vm02.opvwec (mgr.14199) 142 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:35.131 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:41:35.480 INFO:teuthology.orchestra.run.vm02.stdout:NAME                       PORTS        RUNNING  REFRESHED  AGE  PLACEMENT
2026-03-06T23:41:35.480 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager               ?:9093,9094  1/1      7s ago     3m   count:1
2026-03-06T23:41:35.480 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter                           2/2      8s ago     3m   *
2026-03-06T23:41:35.480 INFO:teuthology.orchestra.run.vm02.stdout:crash                                   2/2      8s ago     3m   *
2026-03-06T23:41:35.480 INFO:teuthology.orchestra.run.vm02.stdout:grafana                    ?:3000       1/1      7s ago     3m   count:1
2026-03-06T23:41:35.480 INFO:teuthology.orchestra.run.vm02.stdout:mgr                                     2/2      8s ago     3m   count:2
2026-03-06T23:41:35.480 INFO:teuthology.orchestra.run.vm02.stdout:mon                                     2/2      8s ago     2m   vm02:192.168.123.102=vm02;vm07:192.168.123.107=vm07;count:2
2026-03-06T23:41:35.480 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter              ?:9100       2/2      8s ago     3m   *
2026-03-06T23:41:35.480 INFO:teuthology.orchestra.run.vm02.stdout:osd.all-available-devices               8        8s ago     2m   *
2026-03-06T23:41:35.480 INFO:teuthology.orchestra.run.vm02.stdout:prometheus                 ?:9095       1/1      7s ago     3m   count:1
2026-03-06T23:41:35.538 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- bash -c 'ceph orch host ls'
2026-03-06T23:41:36.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:36 vm07 bash[20848]: cluster 2026-03-06T22:41:34.764419+0000 mgr.vm02.opvwec (mgr.14199) 143 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:36.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:36 vm07 bash[20848]: audit 2026-03-06T22:41:35.473022+0000 mgr.vm02.opvwec (mgr.14199) 144 : audit [DBG] from='client.14454 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:41:36.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:36 vm07 bash[20848]: audit 2026-03-06T22:41:35.800839+0000 mon.vm02 (mon.0) 638 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:41:36.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:36 vm02 bash[17013]: cluster 2026-03-06T22:41:34.764419+0000 mgr.vm02.opvwec (mgr.14199) 143 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:36.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:36 vm02 bash[17013]: audit 2026-03-06T22:41:35.473022+0000 mgr.vm02.opvwec (mgr.14199) 144 : audit [DBG] from='client.14454 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:41:36.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:36 vm02 bash[17013]: audit 2026-03-06T22:41:35.800839+0000 mon.vm02 (mon.0) 638 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:41:38.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:38 vm07 bash[20848]: cluster 2026-03-06T22:41:36.764640+0000 mgr.vm02.opvwec (mgr.14199) 145 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:38.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:38 vm02 bash[17013]: cluster 2026-03-06T22:41:36.764640+0000 mgr.vm02.opvwec (mgr.14199) 145 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:40.305 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:41:40.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:40 vm07 bash[20848]: cluster 2026-03-06T22:41:38.764931+0000 mgr.vm02.opvwec (mgr.14199) 146 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:40.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:40 vm02 bash[17013]: cluster 2026-03-06T22:41:38.764931+0000 mgr.vm02.opvwec (mgr.14199) 146 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:40.656 INFO:teuthology.orchestra.run.vm02.stdout:HOST  ADDR             LABELS  STATUS
2026-03-06T23:41:40.656 INFO:teuthology.orchestra.run.vm02.stdout:vm02  192.168.123.102
2026-03-06T23:41:40.656 INFO:teuthology.orchestra.run.vm02.stdout:vm07  192.168.123.107
2026-03-06T23:41:40.656 INFO:teuthology.orchestra.run.vm02.stdout:2 hosts in cluster
2026-03-06T23:41:40.720 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- bash -c 'ceph orch device ls'
2026-03-06T23:41:41.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:41 vm07 bash[20848]: audit 2026-03-06T22:41:40.651344+0000 mgr.vm02.opvwec (mgr.14199) 147 : audit [DBG] from='client.14458 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:41:41.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:41 vm02 bash[17013]: audit 2026-03-06T22:41:40.651344+0000 mgr.vm02.opvwec (mgr.14199) 147 : audit [DBG] from='client.14458 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:41:42.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:42 vm07 bash[20848]: cluster 2026-03-06T22:41:40.765173+0000 mgr.vm02.opvwec (mgr.14199) 148 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:42.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:42 vm02 bash[17013]: cluster 2026-03-06T22:41:40.765173+0000 mgr.vm02.opvwec (mgr.14199) 148 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:44.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:44 vm07 bash[20848]: cluster 2026-03-06T22:41:42.765397+0000 mgr.vm02.opvwec (mgr.14199) 149 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:44.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:44 vm02 bash[17013]: cluster 2026-03-06T22:41:42.765397+0000 mgr.vm02.opvwec (mgr.14199) 149 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:45.507 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:41:45.887 INFO:teuthology.orchestra.run.vm02.stdout:HOST  PATH      TYPE  DEVICE ID             SIZE   AVAILABLE  REFRESHED  REJECT REASONS
2026-03-06T23:41:45.887 INFO:teuthology.orchestra.run.vm02.stdout:vm02  /dev/sr0  hdd   QEMU_DVD-ROM_QM00003  366k   No         84s ago    Has a FileSystem, Insufficient space (<5GB)
2026-03-06T23:41:45.887 INFO:teuthology.orchestra.run.vm02.stdout:vm02  /dev/vdb  hdd   DWNBRSTVMM02001       20.0G  No         84s ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-06T23:41:45.887 INFO:teuthology.orchestra.run.vm02.stdout:vm02  /dev/vdc  hdd   DWNBRSTVMM02002       20.0G  No         84s ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-06T23:41:45.887 INFO:teuthology.orchestra.run.vm02.stdout:vm02  /dev/vdd  hdd   DWNBRSTVMM02003       20.0G  No         84s ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-06T23:41:45.887 INFO:teuthology.orchestra.run.vm02.stdout:vm02  /dev/vde  hdd   DWNBRSTVMM02004       20.0G  No         84s ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-06T23:41:45.887 INFO:teuthology.orchestra.run.vm02.stdout:vm07  /dev/sr0  hdd   QEMU_DVD-ROM_QM00003  366k   No         83s ago    Has a FileSystem, Insufficient space (<5GB)
2026-03-06T23:41:45.887 INFO:teuthology.orchestra.run.vm02.stdout:vm07  /dev/vdb  hdd   DWNBRSTVMM07001       20.0G  No         83s ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-06T23:41:45.887 INFO:teuthology.orchestra.run.vm02.stdout:vm07  /dev/vdc  hdd   DWNBRSTVMM07002       20.0G  No         83s ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-06T23:41:45.887 INFO:teuthology.orchestra.run.vm02.stdout:vm07  /dev/vdd  hdd   DWNBRSTVMM07003       20.0G  No         83s ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-06T23:41:45.887 INFO:teuthology.orchestra.run.vm02.stdout:vm07  /dev/vde  hdd   DWNBRSTVMM07004       20.0G  No         83s ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-06T23:41:45.947 INFO:teuthology.run_tasks:Running task cephadm.shell...
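Each of the checks above runs the ceph CLI inside a `cephadm shell` container pinned to the test image, config, keyring, and fsid, as the DEBUG command lines show. A minimal Python sketch of that invocation pattern (a hypothetical helper for illustration, not teuthology's own code; the image, path, and fsid are the ones from this run):

```python
import subprocess

# Values observed in the DEBUG lines above; adjust for another cluster.
CEPHADM = "/home/ubuntu/cephtest/cephadm"
IMAGE = "harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5"
FSID = "f8b8c16a-19ac-11f1-87e7-9b7402b99c44"

def cephadm_shell(*cmd: str) -> str:
    """Run one command inside `cephadm shell`, mirroring the DEBUG lines above."""
    argv = [
        "sudo", CEPHADM,
        "--image", IMAGE,
        "shell",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "--fsid", FSID,
        "--", *cmd,
    ]
    return subprocess.run(argv, check=True, capture_output=True, text=True).stdout

# The same sequence of smoke checks the cephadm.shell task issues:
for check in ("orch ps", "orch ls", "orch host ls", "orch device ls"):
    print(cephadm_shell("ceph", *check.split()))
```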
2026-03-06T23:41:45.949 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm02.local
2026-03-06T23:41:45.949 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- bash -c 'ceph orch apply jaeger'
2026-03-06T23:41:46.189 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:46 vm02 bash[17013]: cluster 2026-03-06T22:41:44.765632+0000 mgr.vm02.opvwec (mgr.14199) 150 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:46.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:46 vm07 bash[20848]: cluster 2026-03-06T22:41:44.765632+0000 mgr.vm02.opvwec (mgr.14199) 150 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:47.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:47 vm07 bash[20848]: audit 2026-03-06T22:41:45.880820+0000 mgr.vm02.opvwec (mgr.14199) 151 : audit [DBG] from='client.14462 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:41:47.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:47 vm02 bash[17013]: audit 2026-03-06T22:41:45.880820+0000 mgr.vm02.opvwec (mgr.14199) 151 : audit [DBG] from='client.14462 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:41:48.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:48 vm07 bash[20848]: cluster 2026-03-06T22:41:46.765885+0000 mgr.vm02.opvwec (mgr.14199) 152 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:48.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:48 vm02 bash[17013]: cluster 2026-03-06T22:41:46.765885+0000 mgr.vm02.opvwec (mgr.14199) 152 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:50.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:50 vm07 bash[20848]: cluster 2026-03-06T22:41:48.766147+0000 mgr.vm02.opvwec (mgr.14199) 153 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:50.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:50 vm02 bash[17013]: cluster 2026-03-06T22:41:48.766147+0000 mgr.vm02.opvwec (mgr.14199) 153 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:50.728 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:41:51.152 INFO:teuthology.orchestra.run.vm02.stdout:Scheduled elasticsearch update...
2026-03-06T23:41:51.152 INFO:teuthology.orchestra.run.vm02.stdout:Scheduled jaeger-collector update...
2026-03-06T23:41:51.152 INFO:teuthology.orchestra.run.vm02.stdout:Scheduled jaeger-query update...
2026-03-06T23:41:51.152 INFO:teuthology.orchestra.run.vm02.stdout:Scheduled jaeger-agent update...
2026-03-06T23:41:51.261 INFO:teuthology.run_tasks:Running task cephadm.wait_for_service...
2026-03-06T23:41:51.263 INFO:tasks.cephadm:Waiting for ceph service elasticsearch to start (timeout 300)...
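cephadm.wait_for_service now polls the orchestrator until the named service reports as many running daemons as its placement size, or the 300 s timeout expires. A minimal sketch of that polling loop (behaviour inferred from the log records that follow, not teuthology's actual implementation), reusing the hypothetical cephadm_shell() helper sketched earlier:

```python
import json
import time

def wait_for_service(name: str, timeout: float = 300, interval: float = 5) -> None:
    """Poll `ceph orch ls -f json` until service `name` reports running == size."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        services = json.loads(cephadm_shell("ceph", "orch", "ls", "-f", "json"))
        for svc in services:
            if svc["service_name"] == name:
                status = svc.get("status", {})
                running, size = status.get("running", 0), status.get("size", 0)
                print(f"{name} has {running}/{size}")  # cf. the INFO lines below
                if size > 0 and running == size:
                    return
        time.sleep(interval)
    raise TimeoutError(f"service {name} did not start within {timeout}s")

wait_for_service("elasticsearch")
```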
2026-03-06T23:41:51.263 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph orch ls -f json
2026-03-06T23:41:51.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:51 vm07 bash[20848]: audit 2026-03-06T22:41:50.801099+0000 mon.vm02 (mon.0) 639 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:41:51.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:51 vm07 bash[20848]: audit 2026-03-06T22:41:51.121089+0000 mon.vm02 (mon.0) 640 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:41:51.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:51 vm07 bash[20848]: audit 2026-03-06T22:41:51.122067+0000 mon.vm02 (mon.0) 641 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-06T23:41:51.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:51 vm07 bash[20848]: audit 2026-03-06T22:41:51.129836+0000 mon.vm02 (mon.0) 642 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:41:51.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:51 vm07 bash[20848]: audit 2026-03-06T22:41:51.138498+0000 mon.vm02 (mon.0) 643 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:41:51.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:51 vm07 bash[20848]: audit 2026-03-06T22:41:51.143806+0000 mon.vm02 (mon.0) 644 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:41:51.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:51 vm02 bash[17013]: audit 2026-03-06T22:41:50.801099+0000 mon.vm02 (mon.0) 639 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:41:51.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:51 vm02 bash[17013]: audit 2026-03-06T22:41:51.121089+0000 mon.vm02 (mon.0) 640 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:41:51.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:51 vm02 bash[17013]: audit 2026-03-06T22:41:51.122067+0000 mon.vm02 (mon.0) 641 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-06T23:41:51.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:51 vm02 bash[17013]: audit 2026-03-06T22:41:51.129836+0000 mon.vm02 (mon.0) 642 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:41:51.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:51 vm02 bash[17013]: audit 2026-03-06T22:41:51.138498+0000 mon.vm02 (mon.0) 643 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:41:51.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:51 vm02 bash[17013]: audit 2026-03-06T22:41:51.143806+0000 mon.vm02 (mon.0) 644 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:41:52.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:52 vm07 bash[20848]: cluster 2026-03-06T22:41:50.766408+0000 mgr.vm02.opvwec (mgr.14199) 154 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:52.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:52 vm07 bash[20848]: audit 2026-03-06T22:41:51.115513+0000 mgr.vm02.opvwec (mgr.14199) 155 : audit [DBG] from='client.14466 -' entity='client.admin' cmd=[{"prefix": "orch apply jaeger", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:41:52.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:52 vm07 bash[20848]: cephadm 2026-03-06T22:41:51.116467+0000 mgr.vm02.opvwec (mgr.14199) 156 : cephadm [INF] Saving service elasticsearch spec with placement count:1
2026-03-06T23:41:52.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:52 vm07 bash[20848]: cephadm 2026-03-06T22:41:51.121422+0000 mgr.vm02.opvwec (mgr.14199) 157 : cephadm [INF] Saving service jaeger-collector spec with placement count:1
2026-03-06T23:41:52.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:52 vm07 bash[20848]: cephadm 2026-03-06T22:41:51.130855+0000 mgr.vm02.opvwec (mgr.14199) 158 : cephadm [INF] Saving service jaeger-query spec with placement count:1
2026-03-06T23:41:52.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:52 vm07 bash[20848]: cephadm 2026-03-06T22:41:51.139428+0000 mgr.vm02.opvwec (mgr.14199) 159 : cephadm [INF] Saving service jaeger-agent spec with placement *
2026-03-06T23:41:52.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:52 vm02 bash[17013]: cluster 2026-03-06T22:41:50.766408+0000 mgr.vm02.opvwec (mgr.14199) 154 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:52.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:52 vm02 bash[17013]: audit 2026-03-06T22:41:51.115513+0000 mgr.vm02.opvwec (mgr.14199) 155 : audit [DBG] from='client.14466 -' entity='client.admin' cmd=[{"prefix": "orch apply jaeger", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:41:52.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:52 vm02 bash[17013]: cephadm 2026-03-06T22:41:51.116467+0000 mgr.vm02.opvwec (mgr.14199) 156 : cephadm [INF] Saving service elasticsearch spec with placement count:1
2026-03-06T23:41:52.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:52 vm02 bash[17013]: cephadm 2026-03-06T22:41:51.121422+0000 mgr.vm02.opvwec (mgr.14199) 157 : cephadm [INF] Saving service jaeger-collector spec with placement count:1
2026-03-06T23:41:52.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:52 vm02 bash[17013]: cephadm 2026-03-06T22:41:51.130855+0000 mgr.vm02.opvwec (mgr.14199) 158 : cephadm [INF] Saving service jaeger-query spec with placement count:1
2026-03-06T23:41:52.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:52 vm02 bash[17013]: cephadm 2026-03-06T22:41:51.139428+0000 mgr.vm02.opvwec (mgr.14199) 159 : cephadm [INF] Saving service jaeger-agent spec with placement *
2026-03-06T23:41:54.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:54 vm07 bash[20848]: cluster 2026-03-06T22:41:52.766697+0000 mgr.vm02.opvwec (mgr.14199) 160 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:54.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:54 vm02 bash[17013]: cluster 2026-03-06T22:41:52.766697+0000 mgr.vm02.opvwec (mgr.14199) 160 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:55.948 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:41:56.403 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-06T23:41:56.403 INFO:teuthology.orchestra.run.vm02.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-06T22:37:53.328214Z", "last_refresh": "2026-03-06T22:41:27.481164Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:38:58.838036Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-06T22:37:51.782353Z", "last_refresh": "2026-03-06T22:41:27.481352Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:38:59.749903Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-06T22:37:51.310617Z", "last_refresh": "2026-03-06T22:41:27.481218Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:41:51.121275Z service:elasticsearch [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "elasticsearch", "service_type": "elasticsearch", "status": {"created": "2026-03-06T22:41:51.116481Z", "ports": [9200], "running": 0, "size": 1}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-06T22:37:52.581776Z", "last_refresh": "2026-03-06T22:41:27.481136Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:41:51.143980Z service:jaeger-agent [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "jaeger-agent", "service_type": "jaeger-agent", "status": {"created": "2026-03-06T22:41:51.139431Z", "ports": [6799], "running": 0, "size": 2}}, {"events": ["2026-03-06T22:41:51.130677Z service:jaeger-collector [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "jaeger-collector", "service_type": "jaeger-collector", "status": {"created": "2026-03-06T22:41:51.121608Z", "ports": [14250], "running": 0, "size": 1}}, {"events": ["2026-03-06T22:41:51.139255Z service:jaeger-query [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "jaeger-query", "service_type": "jaeger-query", "status": {"created": "2026-03-06T22:41:51.130859Z", "ports": [16686], "running": 0, "size": 1}}, {"events": ["2026-03-06T22:39:01.334137Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-06T22:37:50.865444Z", "last_refresh": "2026-03-06T22:41:27.481272Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:02.384235Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm02:192.168.123.102=vm02", "vm07:192.168.123.107=vm07"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-06T22:38:51.745852Z", "last_refresh": "2026-03-06T22:41:27.481191Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:00.489648Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-06T22:37:52.956113Z", "last_refresh": "2026-03-06T22:41:27.481298Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:21.440144Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-06T22:39:21.434901Z", "last_refresh": "2026-03-06T22:41:27.481058Z", "running": 8, "size": 8}}, {"events": ["2026-03-06T22:39:02.388064Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-06T22:37:52.205852Z", "last_refresh": "2026-03-06T22:41:27.481244Z", "ports": [9095], "running": 1, "size": 1}}]
2026-03-06T23:41:56.485 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:56 vm02 bash[17013]: cluster 2026-03-06T22:41:54.767087+0000 mgr.vm02.opvwec (mgr.14199) 161 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:56.485 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:56 vm02 bash[17013]: audit 2026-03-06T22:41:56.218435+0000 mon.vm02 (mon.0) 645 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:41:56.486 INFO:tasks.cephadm:elasticsearch has 0/1
2026-03-06T23:41:56.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:56 vm07 bash[20848]: cluster 2026-03-06T22:41:54.767087+0000 mgr.vm02.opvwec (mgr.14199) 161 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:56.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:56 vm07 bash[20848]: audit 2026-03-06T22:41:56.218435+0000 mon.vm02 (mon.0) 645 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:41:57.228 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:57 vm07 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
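The JSON dump above is the machine-readable form of the `ceph orch ls` table seen earlier: one object per service, each carrying a placement spec and a status block with created/last_refresh timestamps, ports, and running/size counts. A small illustrative sketch (a hypothetical helper, assuming output shaped like the dump above) that condenses such a dump into the RUNNING column the task is watching:

```python
import json

def summarize_orch_ls(raw: str) -> None:
    """Print service name, running/size, and ports from `ceph orch ls -f json` output."""
    for svc in json.loads(raw):
        status = svc.get("status", {})
        ports = ",".join(map(str, status.get("ports", []))) or "-"
        running, size = status.get("running", 0), status.get("size", 0)
        print(f"{svc['service_name']:<28} {running}/{size:<6} {ports}")

# Fed the dump above, the four specs just applied (elasticsearch, jaeger-agent,
# jaeger-collector, jaeger-query) all show running 0: they were created seconds
# ago and have no last_refresh yet, hence "elasticsearch has 0/1" and another poll.
```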
2026-03-06T23:41:57.487 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph orch ls -f json 2026-03-06T23:41:57.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:57 vm02 bash[17013]: audit 2026-03-06T22:41:56.224081+0000 mon.vm02 (mon.0) 646 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:57.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:57 vm02 bash[17013]: audit 2026-03-06T22:41:56.224081+0000 mon.vm02 (mon.0) 646 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:57.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:57 vm02 bash[17013]: audit 2026-03-06T22:41:56.396418+0000 mgr.vm02.opvwec (mgr.14199) 162 : audit [DBG] from='client.14470 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:41:57.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:57 vm02 bash[17013]: audit 2026-03-06T22:41:56.396418+0000 mgr.vm02.opvwec (mgr.14199) 162 : audit [DBG] from='client.14470 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:41:57.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:57 vm02 bash[17013]: audit 2026-03-06T22:41:56.658867+0000 mon.vm02 (mon.0) 647 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:57.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:57 vm02 bash[17013]: audit 2026-03-06T22:41:56.658867+0000 mon.vm02 (mon.0) 647 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:57.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:57 vm02 bash[17013]: audit 2026-03-06T22:41:56.664902+0000 mon.vm02 (mon.0) 648 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:57.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:57 vm02 bash[17013]: audit 2026-03-06T22:41:56.664902+0000 mon.vm02 (mon.0) 648 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:57.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:57 vm02 bash[17013]: audit 2026-03-06T22:41:56.666502+0000 mon.vm02 (mon.0) 649 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:41:57.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:57 vm02 bash[17013]: audit 2026-03-06T22:41:56.666502+0000 mon.vm02 (mon.0) 649 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:41:57.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:57 vm02 bash[17013]: audit 2026-03-06T22:41:56.667026+0000 mon.vm02 (mon.0) 650 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-06T23:41:57.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:57 vm02 bash[17013]: audit 2026-03-06T22:41:56.667026+0000 mon.vm02 (mon.0) 650 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' 
entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-06T23:41:57.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:57 vm02 bash[17013]: audit 2026-03-06T22:41:56.671199+0000 mon.vm02 (mon.0) 651 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:57.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:57 vm02 bash[17013]: audit 2026-03-06T22:41:56.671199+0000 mon.vm02 (mon.0) 651 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:57.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:57 vm02 bash[17013]: audit 2026-03-06T22:41:56.672916+0000 mon.vm02 (mon.0) 652 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-06T23:41:57.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:57 vm02 bash[17013]: audit 2026-03-06T22:41:56.672916+0000 mon.vm02 (mon.0) 652 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-06T23:41:57.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:57 vm02 bash[17013]: cephadm 2026-03-06T22:41:56.675180+0000 mgr.vm02.opvwec (mgr.14199) 163 : cephadm [INF] Deploying daemon jaeger-agent.vm07 on vm07 2026-03-06T23:41:57.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:57 vm02 bash[17013]: cephadm 2026-03-06T22:41:56.675180+0000 mgr.vm02.opvwec (mgr.14199) 163 : cephadm [INF] Deploying daemon jaeger-agent.vm07 on vm07 2026-03-06T23:41:57.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:57 vm07 bash[20848]: audit 2026-03-06T22:41:56.224081+0000 mon.vm02 (mon.0) 646 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:57.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:57 vm07 bash[20848]: audit 2026-03-06T22:41:56.224081+0000 mon.vm02 (mon.0) 646 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:57.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:57 vm07 bash[20848]: audit 2026-03-06T22:41:56.396418+0000 mgr.vm02.opvwec (mgr.14199) 162 : audit [DBG] from='client.14470 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:41:57.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:57 vm07 bash[20848]: audit 2026-03-06T22:41:56.396418+0000 mgr.vm02.opvwec (mgr.14199) 162 : audit [DBG] from='client.14470 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:41:57.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:57 vm07 bash[20848]: audit 2026-03-06T22:41:56.658867+0000 mon.vm02 (mon.0) 647 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:57.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:57 vm07 bash[20848]: audit 2026-03-06T22:41:56.658867+0000 mon.vm02 (mon.0) 647 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:57.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:57 vm07 bash[20848]: audit 2026-03-06T22:41:56.664902+0000 mon.vm02 (mon.0) 648 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 
2026-03-06T23:41:57.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:57 vm07 bash[20848]: audit 2026-03-06T22:41:56.664902+0000 mon.vm02 (mon.0) 648 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:57.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:57 vm07 bash[20848]: audit 2026-03-06T22:41:56.666502+0000 mon.vm02 (mon.0) 649 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:41:57.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:57 vm07 bash[20848]: audit 2026-03-06T22:41:56.666502+0000 mon.vm02 (mon.0) 649 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:41:57.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:57 vm07 bash[20848]: audit 2026-03-06T22:41:56.667026+0000 mon.vm02 (mon.0) 650 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-06T23:41:57.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:57 vm07 bash[20848]: audit 2026-03-06T22:41:56.667026+0000 mon.vm02 (mon.0) 650 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-06T23:41:57.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:57 vm07 bash[20848]: audit 2026-03-06T22:41:56.671199+0000 mon.vm02 (mon.0) 651 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:57.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:57 vm07 bash[20848]: audit 2026-03-06T22:41:56.671199+0000 mon.vm02 (mon.0) 651 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:57.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:57 vm07 bash[20848]: audit 2026-03-06T22:41:56.672916+0000 mon.vm02 (mon.0) 652 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-06T23:41:57.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:57 vm07 bash[20848]: audit 2026-03-06T22:41:56.672916+0000 mon.vm02 (mon.0) 652 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-06T23:41:57.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:57 vm07 bash[20848]: cephadm 2026-03-06T22:41:56.675180+0000 mgr.vm02.opvwec (mgr.14199) 163 : cephadm [INF] Deploying daemon jaeger-agent.vm07 on vm07 2026-03-06T23:41:57.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:57 vm07 bash[20848]: cephadm 2026-03-06T22:41:56.675180+0000 mgr.vm02.opvwec (mgr.14199) 163 : cephadm [INF] Deploying daemon jaeger-agent.vm07 on vm07 2026-03-06T23:41:57.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:57 vm07 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-06T23:41:57.899 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:57 vm02 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-06T23:41:58.173 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:58 vm02 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-06T23:41:58.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:58 vm02 bash[17013]: cluster 2026-03-06T22:41:56.767296+0000 mgr.vm02.opvwec (mgr.14199) 164 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:58.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:58 vm02 bash[17013]: cluster 2026-03-06T22:41:56.767296+0000 mgr.vm02.opvwec (mgr.14199) 164 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:41:58.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:58 vm02 bash[17013]: audit 2026-03-06T22:41:57.374049+0000 mon.vm02 (mon.0) 653 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:58.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:58 vm02 bash[17013]: audit 2026-03-06T22:41:57.374049+0000 mon.vm02 (mon.0) 653 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:58.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:58 vm02 bash[17013]: audit 2026-03-06T22:41:57.379004+0000 mon.vm02 (mon.0) 654 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:58.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:58 vm02 bash[17013]: audit 2026-03-06T22:41:57.379004+0000 mon.vm02 (mon.0) 654 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:58.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:58 vm02 bash[17013]: audit 2026-03-06T22:41:57.382870+0000 mon.vm02 (mon.0) 655 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:58.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:58 vm02 bash[17013]: audit 2026-03-06T22:41:57.382870+0000 mon.vm02 (mon.0) 655 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:41:58.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:58 vm02 bash[17013]: cephadm 2026-03-06T22:41:57.383406+0000 mgr.vm02.opvwec (mgr.14199) 165 : cephadm [INF] Deploying daemon jaeger-agent.vm02 on vm02 2026-03-06T23:41:58.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:58 vm02 bash[17013]: cephadm 2026-03-06T22:41:57.383406+0000 mgr.vm02.opvwec (mgr.14199) 165 : cephadm [INF] Deploying daemon jaeger-agent.vm02 on vm02 2026-03-06T23:41:58.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:58 vm02 bash[17013]: audit 2026-03-06T22:41:58.175947+0000 mon.vm02 (mon.0) 
656 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:41:58.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:58 vm02 bash[17013]: audit 2026-03-06T22:41:58.181681+0000 mon.vm02 (mon.0) 657 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:41:58.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:58 vm02 bash[17013]: audit 2026-03-06T22:41:58.185492+0000 mon.vm02 (mon.0) 658 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:41:58.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:58 vm02 bash[17013]: audit 2026-03-06T22:41:58.188568+0000 mon.vm02 (mon.0) 659 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:41:58.700 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:58 vm07 bash[20848]: cluster 2026-03-06T22:41:56.767296+0000 mgr.vm02.opvwec (mgr.14199) 164 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:41:58.700 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:58 vm07 bash[20848]: audit 2026-03-06T22:41:57.374049+0000 mon.vm02 (mon.0) 653 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:41:58.700 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:58 vm07 bash[20848]: audit 2026-03-06T22:41:57.379004+0000 mon.vm02 (mon.0) 654 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:41:58.700 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:58 vm07 bash[20848]: audit 2026-03-06T22:41:57.382870+0000 mon.vm02 (mon.0) 655 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:41:58.700 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:58 vm07 bash[20848]: cephadm 2026-03-06T22:41:57.383406+0000 mgr.vm02.opvwec (mgr.14199) 165 : cephadm [INF] Deploying daemon jaeger-agent.vm02 on vm02
2026-03-06T23:41:58.700 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:58 vm07 bash[20848]: audit 2026-03-06T22:41:58.175947+0000 mon.vm02 (mon.0) 656 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:41:58.700 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:58 vm07 bash[20848]: audit 2026-03-06T22:41:58.181681+0000 mon.vm02 (mon.0) 657 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:41:58.700 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:58 vm07 bash[20848]: audit 2026-03-06T22:41:58.185492+0000 mon.vm02 (mon.0) 658 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:41:58.700 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:58 vm07 bash[20848]: audit 2026-03-06T22:41:58.188568+0000 mon.vm02 (mon.0) 659 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:41:58.700 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:58 vm07 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-06T23:41:59.477 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:59 vm02 bash[17013]: cephadm 2026-03-06T22:41:58.190429+0000 mgr.vm02.opvwec (mgr.14199) 166 : cephadm [INF] Deploying daemon elasticsearch.vm07 on vm07
2026-03-06T23:41:59.478 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:59 vm02 bash[17013]: audit 2026-03-06T22:41:58.947759+0000 mon.vm02 (mon.0) 660 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:41:59.478 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:59 vm02 bash[17013]: audit 2026-03-06T22:41:58.952372+0000 mon.vm02 (mon.0) 661 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:41:59.478 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:59 vm02 bash[17013]: audit 2026-03-06T22:41:58.955977+0000 mon.vm02 (mon.0) 662 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:41:59.478 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:59 vm02 bash[17013]: audit 2026-03-06T22:41:58.959260+0000 mon.vm02 (mon.0) 663 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:41:59.478 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:41:59 vm02 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
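The KillMode=none complaints repeated throughout this run come from line 23 of the ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service unit that cephadm generates for each containerized daemon; cephadm has historically set KillMode=none on purpose so that systemd does not tear down the daemon's container processes itself, so for this smoke test the warning is noise. If one did want to follow systemd's suggestion, the standard mechanism would be a drop-in override, roughly as below (hypothetical, not part of this run, and liable to be undone when cephadm regenerates its unit files):

# Hypothetical drop-in: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service.d/override.conf
# Illustrative only; cephadm manages these units and may rewrite them.
[Service]
KillMode=mixed

followed by a `systemctl daemon-reload` to pick up the override.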
2026-03-06T23:41:59.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:59 vm07 bash[20848]: cephadm 2026-03-06T22:41:58.190429+0000 mgr.vm02.opvwec (mgr.14199) 166 : cephadm [INF] Deploying daemon elasticsearch.vm07 on vm07
2026-03-06T23:41:59.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:59 vm07 bash[20848]: audit 2026-03-06T22:41:58.947759+0000 mon.vm02 (mon.0) 660 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:41:59.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:59 vm07 bash[20848]: audit 2026-03-06T22:41:58.952372+0000 mon.vm02 (mon.0) 661 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:41:59.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:59 vm07 bash[20848]: audit 2026-03-06T22:41:58.955977+0000 mon.vm02 (mon.0) 662 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:41:59.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:41:59 vm07 bash[20848]: audit 2026-03-06T22:41:58.959260+0000 mon.vm02 (mon.0) 663 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:00.447 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:00 vm07 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-06T23:42:00.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:00 vm07 bash[20848]: cluster 2026-03-06T22:41:58.767547+0000 mgr.vm02.opvwec (mgr.14199) 167 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:42:00.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:00 vm07 bash[20848]: cephadm 2026-03-06T22:41:58.960299+0000 mgr.vm02.opvwec (mgr.14199) 168 : cephadm [INF] Deploying daemon jaeger-collector.vm02 on vm02
2026-03-06T23:42:00.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:00 vm07 bash[20848]: audit 2026-03-06T22:41:59.685917+0000 mon.vm02 (mon.0) 664 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:00.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:00 vm07 bash[20848]: audit 2026-03-06T22:41:59.691308+0000 mon.vm02 (mon.0) 665 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:00.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:00 vm07 bash[20848]: audit 2026-03-06T22:41:59.697543+0000 mon.vm02 (mon.0) 666 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:00.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:00 vm07 bash[20848]: audit 2026-03-06T22:41:59.701108+0000 mon.vm02 (mon.0) 667 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:00.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:00 vm07 bash[20848]: cephadm 2026-03-06T22:41:59.702275+0000 mgr.vm02.opvwec (mgr.14199) 169 : cephadm [INF] Deploying daemon jaeger-query.vm07 on vm07
2026-03-06T23:42:00.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:00 vm07 bash[20848]: audit 2026-03-06T22:42:00.479700+0000 mon.vm02 (mon.0) 668 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:00.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:00 vm07 bash[20848]: audit 2026-03-06T22:42:00.485025+0000 mon.vm02 (mon.0) 669 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:00.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:00 vm07 bash[20848]: audit 2026-03-06T22:42:00.490394+0000 mon.vm02 (mon.0) 670 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:00.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:00 vm07 bash[20848]: audit 2026-03-06T22:42:00.496600+0000 mon.vm02 (mon.0) 671 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:00.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:00 vm07 bash[20848]: audit 2026-03-06T22:42:00.506286+0000 mon.vm02 (mon.0) 672 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-06T23:42:00.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:00 vm02 bash[17013]: cluster 2026-03-06T22:41:58.767547+0000 mgr.vm02.opvwec (mgr.14199) 167 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:42:00.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:00 vm02 bash[17013]: cephadm 2026-03-06T22:41:58.960299+0000 mgr.vm02.opvwec (mgr.14199) 168 : cephadm [INF] Deploying daemon jaeger-collector.vm02 on vm02
2026-03-06T23:42:00.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:00 vm02 bash[17013]: audit 2026-03-06T22:41:59.685917+0000 mon.vm02 (mon.0) 664 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:00.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:00 vm02 bash[17013]: audit 2026-03-06T22:41:59.691308+0000 mon.vm02 (mon.0) 665 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:00.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:00 vm02 bash[17013]: audit 2026-03-06T22:41:59.697543+0000 mon.vm02 (mon.0) 666 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:00.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:00 vm02 bash[17013]: audit 2026-03-06T22:41:59.701108+0000 mon.vm02 (mon.0) 667 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:00.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:00 vm02 bash[17013]: cephadm 2026-03-06T22:41:59.702275+0000 mgr.vm02.opvwec (mgr.14199) 169 : cephadm [INF] Deploying daemon jaeger-query.vm07 on vm07
2026-03-06T23:42:00.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:00 vm02 bash[17013]: audit 2026-03-06T22:42:00.479700+0000 mon.vm02 (mon.0) 668 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:00.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:00 vm02 bash[17013]: audit 2026-03-06T22:42:00.485025+0000 mon.vm02 (mon.0) 669 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:00.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:00 vm02 bash[17013]: audit 2026-03-06T22:42:00.490394+0000 mon.vm02 (mon.0) 670 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:00.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:00 vm02 bash[17013]: audit 2026-03-06T22:42:00.496600+0000 mon.vm02 (mon.0) 671 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:00.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:00 vm02 bash[17013]: audit 2026-03-06T22:42:00.506286+0000 mon.vm02 (mon.0) 672 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-06T23:42:02.319 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:42:02.706 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-06T23:42:02.707 INFO:teuthology.orchestra.run.vm02.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-06T22:37:53.328214Z", "last_refresh": "2026-03-06T22:41:56.652398Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:38:58.838036Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-06T22:37:51.782353Z", "last_refresh": "2026-03-06T22:41:56.212087Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:38:59.749903Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-06T22:37:51.310617Z", "last_refresh": "2026-03-06T22:41:56.212058Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:41:58.959413Z service:elasticsearch [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "elasticsearch", "service_type": "elasticsearch", "status": {"created": "2026-03-06T22:41:51.116481Z", "ports": [9200], "running": 0, "size": 1}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-06T22:37:52.581776Z", "last_refresh": "2026-03-06T22:41:56.652369Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:41:58.188734Z service:jaeger-agent [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "jaeger-agent", "service_type": "jaeger-agent", "status": {"created": "2026-03-06T22:41:51.139431Z", "ports": [6799], "running": 0, "size": 2}}, {"events": ["2026-03-06T22:41:59.701265Z service:jaeger-collector [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "jaeger-collector", "service_type": "jaeger-collector", "status": {"created": "2026-03-06T22:41:51.121608Z", "ports": [14250], "running": 0, "size": 1}}, {"events": ["2026-03-06T22:42:00.496940Z service:jaeger-query [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "jaeger-query", "service_type": "jaeger-query", "status": {"created": "2026-03-06T22:41:51.130859Z", "ports": [16686], "running": 0, "size": 1}}, {"events": ["2026-03-06T22:39:01.334137Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-06T22:37:50.865444Z", "last_refresh": "2026-03-06T22:41:56.212000Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:02.384235Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm02:192.168.123.102=vm02", "vm07:192.168.123.107=vm07"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-06T22:38:51.745852Z", "last_refresh": "2026-03-06T22:41:56.211832Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:00.489648Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-06T22:37:52.956113Z", "last_refresh": "2026-03-06T22:41:56.211971Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:21.440144Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-06T22:39:21.434901Z", "last_refresh": "2026-03-06T22:41:56.211882Z", "running": 8, "size": 8}}, {"events": ["2026-03-06T22:39:02.388064Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-06T22:37:52.205852Z", "last_refresh": "2026-03-06T22:41:56.652480Z", "ports": [9095], "running": 1, "size": 1}}]
2026-03-06T23:42:02.800 INFO:tasks.cephadm:elasticsearch has 0/1
2026-03-06T23:42:02.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:02 vm07 bash[20848]: cluster 2026-03-06T22:42:00.767766+0000 mgr.vm02.opvwec (mgr.14199) 170 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:42:02.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:02 vm02 bash[17013]: cluster 2026-03-06T22:42:00.767766+0000 mgr.vm02.opvwec (mgr.14199) 170 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:42:03.801 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph orch ls -f json
2026-03-06T23:42:03.992 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:03 vm07 bash[20848]: audit 2026-03-06T22:42:02.699646+0000 mgr.vm02.opvwec (mgr.14199) 171 : audit [DBG] from='client.14474 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-06T23:42:03.993 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:03 vm07 bash[20848]: audit 2026-03-06T22:42:02.722921+0000 mon.vm02 (mon.0) 673 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:03.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:03 vm02 bash[17013]: audit 2026-03-06T22:42:02.699646+0000 mgr.vm02.opvwec (mgr.14199) 171 : audit [DBG] from='client.14474 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-06T23:42:03.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:03 vm02 bash[17013]: audit 2026-03-06T22:42:02.722921+0000 mon.vm02 (mon.0) 673 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:04.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:04 vm02 bash[17013]: cluster 2026-03-06T22:42:02.768043+0000 mgr.vm02.opvwec (mgr.14199) 172 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:42:05.228 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:04 vm07 bash[20848]: cluster 2026-03-06T22:42:02.768043+0000 mgr.vm02.opvwec (mgr.14199) 172 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
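The `elasticsearch has 0/1` line above is the cephadm task comparing the `running` and `size` counters of the elasticsearch entry in the `ceph orch ls -f json` dump it just collected (the daemon had only just been scheduled, so nothing was running yet). A minimal sketch of that check, assuming only the JSON shape visible in the dump and not the actual teuthology implementation:

#!/usr/bin/env python3
# Count running daemons for one service from `ceph orch ls` JSON output.
import json
import subprocess

def service_counts(service_name):
    out = subprocess.check_output(["ceph", "orch", "ls", "--format", "json"])
    for svc in json.loads(out):
        if svc.get("service_name") == service_name:
            status = svc.get("status", {})
            # "running" can be absent before the first refresh; default to 0.
            return status.get("running", 0), status.get("size", 0)
    return 0, 0

running, size = service_counts("elasticsearch")
print(f"elasticsearch has {running}/{size}")

Once the mgr refreshes daemon state, the same query reports 1/1, which is exactly what the next dump below shows.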
2026-03-06T23:42:06.228 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:05 vm07 bash[20848]: cluster 2026-03-06T22:42:04.768335+0000 mgr.vm02.opvwec (mgr.14199) 173 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:42:06.242 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:05 vm02 bash[17013]: cluster 2026-03-06T22:42:04.768335+0000 mgr.vm02.opvwec (mgr.14199) 173 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:42:06.889 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:06 vm02 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-06T23:42:06.889 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:06 vm02 bash[17013]: audit 2026-03-06T22:42:05.804370+0000 mon.vm02 (mon.0) 674 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:42:06.889 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:06 vm02 bash[17013]: audit 2026-03-06T22:42:06.187771+0000 mon.vm02 (mon.0) 675 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:06.889 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:06 vm02 bash[17013]: audit 2026-03-06T22:42:06.194956+0000 mon.vm02 (mon.0) 676 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:06.889 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:06 vm02 bash[17013]: audit 2026-03-06T22:42:06.199698+0000 mon.vm02 (mon.0) 677 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:06.889 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:06 vm02 bash[17013]: audit 2026-03-06T22:42:06.208587+0000 mon.vm02 (mon.0) 678 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:06.889 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:06 vm02 bash[17013]: audit 2026-03-06T22:42:06.209375+0000 mon.vm02 (mon.0) 679 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:42:06.889 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:06 vm02 bash[17013]: audit 2026-03-06T22:42:06.209946+0000 mon.vm02 (mon.0) 680 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-06T23:42:06.889 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:06 vm02 bash[17013]: cluster 2026-03-06T22:42:06.210763+0000 mgr.vm02.opvwec (mgr.14199) 174 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:42:06.889 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:06 vm02 bash[17013]: audit 2026-03-06T22:42:06.213371+0000 mon.vm02 (mon.0) 681 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:06.889 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:06 vm02 bash[17013]: audit 2026-03-06T22:42:06.214583+0000 mon.vm02 (mon.0) 682 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-06T23:42:06.889 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:06 vm02 bash[17013]: cephadm 2026-03-06T22:42:06.226403+0000 mgr.vm02.opvwec (mgr.14199) 175 : cephadm [INF] Reconfiguring jaeger-agent.vm02 (dependencies changed)...
2026-03-06T23:42:06.889 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:06 vm02 bash[17013]: cephadm 2026-03-06T22:42:06.226602+0000 mgr.vm02.opvwec (mgr.14199) 176 : cephadm [INF] Deploying daemon jaeger-agent.vm02 on vm02
2026-03-06T23:42:07.159 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:06 vm07 bash[20848]: audit 2026-03-06T22:42:05.804370+0000 mon.vm02 (mon.0) 674 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:42:07.159 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:06 vm07 bash[20848]: audit 2026-03-06T22:42:06.187771+0000 mon.vm02 (mon.0) 675 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:07.159 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:06 vm07 bash[20848]: audit 2026-03-06T22:42:06.194956+0000 mon.vm02 (mon.0) 676 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:07.159 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:06 vm07 bash[20848]: audit 2026-03-06T22:42:06.199698+0000 mon.vm02 (mon.0) 677 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:07.159 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:06 vm07 bash[20848]: audit 2026-03-06T22:42:06.208587+0000 mon.vm02 (mon.0) 678 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:07.159 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:06 vm07 bash[20848]: audit 2026-03-06T22:42:06.209375+0000 mon.vm02 (mon.0) 679 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-06T23:42:07.159 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:06 vm07 bash[20848]: audit 2026-03-06T22:42:06.209946+0000 mon.vm02 (mon.0) 680 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-06T23:42:07.159 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:06 vm07 bash[20848]: cluster 2026-03-06T22:42:06.210763+0000 mgr.vm02.opvwec (mgr.14199) 174 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:42:07.159 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:06 vm07 bash[20848]: audit 2026-03-06T22:42:06.213371+0000 mon.vm02 (mon.0) 681 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:07.159 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:06 vm07 bash[20848]: audit 2026-03-06T22:42:06.214583+0000 mon.vm02 (mon.0) 682 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-06T23:42:07.159 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:06 vm07 bash[20848]: cephadm 2026-03-06T22:42:06.226403+0000 mgr.vm02.opvwec (mgr.14199) 175 : cephadm [INF] Reconfiguring jaeger-agent.vm02 (dependencies changed)...
2026-03-06T23:42:07.159 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:06 vm07 bash[20848]: cephadm 2026-03-06T22:42:06.226602+0000 mgr.vm02.opvwec (mgr.14199) 176 : cephadm [INF] Deploying daemon jaeger-agent.vm02 on vm02
2026-03-06T23:42:07.753 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:07 vm07 systemd[1]: /etc/systemd/system/ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-06T23:42:09.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:08 vm02 bash[17013]: audit 2026-03-06T22:42:07.018459+0000 mon.vm02 (mon.0) 683 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:09.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:08 vm02 bash[17013]: audit 2026-03-06T22:42:07.035575+0000 mon.vm02 (mon.0) 684 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:09.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:08 vm02 bash[17013]: cephadm 2026-03-06T22:42:07.036387+0000 mgr.vm02.opvwec (mgr.14199) 177 : cephadm [INF] Reconfiguring jaeger-agent.vm07 (dependencies changed)...
2026-03-06T23:42:09.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:08 vm02 bash[17013]: cephadm 2026-03-06T22:42:07.036697+0000 mgr.vm02.opvwec (mgr.14199) 178 : cephadm [INF] Deploying daemon jaeger-agent.vm07 on vm07
2026-03-06T23:42:09.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:08 vm02 bash[17013]: cluster 2026-03-06T22:42:07.207999+0000 mon.vm02 (mon.0) 685 : cluster [WRN] Health check failed: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
2026-03-06T23:42:09.374 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:42:09.444 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:09 vm07 bash[20848]: audit 2026-03-06T22:42:07.018459+0000 mon.vm02 (mon.0) 683 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:09.444 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:09 vm07 bash[20848]: audit 2026-03-06T22:42:07.035575+0000 mon.vm02 (mon.0) 684 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:09.444 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:09 vm07 bash[20848]: cephadm 2026-03-06T22:42:07.036387+0000 mgr.vm02.opvwec (mgr.14199) 177 : cephadm [INF] Reconfiguring jaeger-agent.vm07 (dependencies changed)...
2026-03-06T23:42:09.445 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:09 vm07 bash[20848]: cephadm 2026-03-06T22:42:07.036697+0000 mgr.vm02.opvwec (mgr.14199) 178 : cephadm [INF] Deploying daemon jaeger-agent.vm07 on vm07
2026-03-06T23:42:09.445 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:09 vm07 bash[20848]: cluster 2026-03-06T22:42:07.207999+0000 mon.vm02 (mon.0) 685 : cluster [WRN] Health check failed: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
2026-03-06T23:42:09.785 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-06T23:42:09.785 INFO:teuthology.orchestra.run.vm02.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-06T22:37:53.328214Z", "last_refresh": "2026-03-06T22:42:05.995488Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:38:58.838036Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-06T22:37:51.782353Z", "last_refresh": "2026-03-06T22:42:05.926461Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:38:59.749903Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-06T22:37:51.310617Z", "last_refresh": "2026-03-06T22:42:05.926417Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:41:58.959413Z service:elasticsearch [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "elasticsearch", "service_type": "elasticsearch", "status": {"created": "2026-03-06T22:41:51.116481Z", "last_refresh": "2026-03-06T22:42:05.926174Z", "ports": [9200], "running": 1, "size": 1}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-06T22:37:52.581776Z", "last_refresh": "2026-03-06T22:42:05.995459Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:41:58.188734Z service:jaeger-agent [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "jaeger-agent", "service_type": "jaeger-agent", "status": {"created": "2026-03-06T22:41:51.139431Z", "ports": [6799], "running": 0, "size": 2}}, {"events": ["2026-03-06T22:41:59.701265Z service:jaeger-collector [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "jaeger-collector", "service_type": "jaeger-collector", "status": {"created": "2026-03-06T22:41:51.121608Z", "last_refresh": "2026-03-06T22:42:05.995543Z", "ports": [14250], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:42:00.496940Z service:jaeger-query [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "jaeger-query", "service_type": "jaeger-query", "status": {"created": "2026-03-06T22:41:51.130859Z", "last_refresh": "2026-03-06T22:42:05.926269Z", "ports": [16686], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:39:01.334137Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-06T22:37:50.865444Z", "last_refresh": "2026-03-06T22:42:05.926321Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:02.384235Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm02:192.168.123.102=vm02", "vm07:192.168.123.107=vm07"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-06T22:38:51.745852Z", "last_refresh": "2026-03-06T22:42:05.925942Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:00.489648Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-06T22:37:52.956113Z", "last_refresh": "2026-03-06T22:42:05.926231Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:21.440144Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-06T22:39:21.434901Z", "last_refresh": "2026-03-06T22:42:05.926006Z", "running": 8, "size": 8}}, {"events": ["2026-03-06T22:39:02.388064Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-06T22:37:52.205852Z", "last_refresh": "2026-03-06T22:42:05.995600Z", "ports": [9095], "running": 1, "size": 1}}]
2026-03-06T23:42:09.858 INFO:tasks.cephadm:elasticsearch has 1/1
2026-03-06T23:42:09.858 INFO:teuthology.run_tasks:Running task cephadm.wait_for_service...
2026-03-06T23:42:09.860 INFO:tasks.cephadm:Waiting for ceph service jaeger-collector to start (timeout 300)...
2026-03-06T23:42:09.860 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph orch ls -f json
2026-03-06T23:42:10.228 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:09 vm07 bash[20848]: cluster 2026-03-06T22:42:08.211010+0000 mgr.vm02.opvwec (mgr.14199) 179 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:42:10.229 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:09 vm07 bash[20848]: audit 2026-03-06T22:42:09.331556+0000 mon.vm02 (mon.0) 686 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:10.229 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:09 vm07 bash[20848]: audit 2026-03-06T22:42:09.338193+0000 mon.vm02 (mon.0) 687 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
2026-03-06T23:42:10.229 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:09 vm07 bash[20848]: audit 2026-03-06T22:42:09.370413+0000 mon.vm02 (mon.0) 688 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-06T23:42:10.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:09 vm02 bash[17013]: cluster 2026-03-06T22:42:08.211010+0000 mgr.vm02.opvwec (mgr.14199) 179 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:42:10.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:09 vm02 bash[17013]: audit 2026-03-06T22:42:09.331556+0000 mon.vm02 (mon.0) 686 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec'
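The cephadm.wait_for_service task above repeats the same `ceph orch ls -f json` poll for jaeger-collector until the service reports all of its daemons running or the 300-second timeout expires. A rough sketch of such a wait loop under the same JSON-shape assumption (again not teuthology's actual code; the poll interval here is an arbitrary choice):

#!/usr/bin/env python3
# Poll `ceph orch ls` until a service is fully up, or fail after a timeout.
import json
import subprocess
import time

def wait_for_service(service_name, timeout=300, interval=5):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        out = subprocess.check_output(["ceph", "orch", "ls", "--format", "json"])
        for svc in json.loads(out):
            if svc.get("service_name") == service_name:
                status = svc.get("status", {})
                size = status.get("size", 0)
                # Done once every scheduled daemon is reported running.
                if size > 0 and status.get("running", 0) == size:
                    return
        time.sleep(interval)
    raise TimeoutError(f"service {service_name} did not start within {timeout}s")

wait_for_service("jaeger-collector", timeout=300)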
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:09 vm02 bash[17013]: audit 2026-03-06T22:42:09.338193+0000 mon.vm02 (mon.0) 687 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:42:10.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:09 vm02 bash[17013]: audit 2026-03-06T22:42:09.338193+0000 mon.vm02 (mon.0) 687 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:42:10.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:09 vm02 bash[17013]: audit 2026-03-06T22:42:09.370413+0000 mon.vm02 (mon.0) 688 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-06T23:42:10.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:09 vm02 bash[17013]: audit 2026-03-06T22:42:09.370413+0000 mon.vm02 (mon.0) 688 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-06T23:42:10.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:10 vm02 bash[17013]: audit 2026-03-06T22:42:09.777633+0000 mgr.vm02.opvwec (mgr.14199) 180 : audit [DBG] from='client.14478 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:42:10.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:10 vm02 bash[17013]: audit 2026-03-06T22:42:09.777633+0000 mgr.vm02.opvwec (mgr.14199) 180 : audit [DBG] from='client.14478 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:42:10.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:10 vm02 bash[17013]: cluster 2026-03-06T22:42:10.211423+0000 mgr.vm02.opvwec (mgr.14199) 181 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:10.994 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:10 vm02 bash[17013]: cluster 2026-03-06T22:42:10.211423+0000 mgr.vm02.opvwec (mgr.14199) 181 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:11.228 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:10 vm07 bash[20848]: audit 2026-03-06T22:42:09.777633+0000 mgr.vm02.opvwec (mgr.14199) 180 : audit [DBG] from='client.14478 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:42:11.228 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:10 vm07 bash[20848]: audit 2026-03-06T22:42:09.777633+0000 mgr.vm02.opvwec (mgr.14199) 180 : audit [DBG] from='client.14478 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:42:11.228 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:10 vm07 bash[20848]: cluster 2026-03-06T22:42:10.211423+0000 mgr.vm02.opvwec (mgr.14199) 181 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:11.228 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:10 vm07 bash[20848]: cluster 2026-03-06T22:42:10.211423+0000 mgr.vm02.opvwec (mgr.14199) 181 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:13.980 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:13 vm07 bash[20848]: cluster 
2026-03-06T22:42:12.211712+0000 mgr.vm02.opvwec (mgr.14199) 182 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:13.981 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:13 vm07 bash[20848]: cluster 2026-03-06T22:42:12.211712+0000 mgr.vm02.opvwec (mgr.14199) 182 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:13.992 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:13 vm02 bash[17013]: cluster 2026-03-06T22:42:12.211712+0000 mgr.vm02.opvwec (mgr.14199) 182 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:13.992 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:13 vm02 bash[17013]: cluster 2026-03-06T22:42:12.211712+0000 mgr.vm02.opvwec (mgr.14199) 182 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:14.422 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config 2026-03-06T23:42:14.795 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-06T23:42:14.795 INFO:teuthology.orchestra.run.vm02.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-06T22:37:53.328214Z", "last_refresh": "2026-03-06T22:42:14.061298Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:38:58.838036Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-06T22:37:51.782353Z", "last_refresh": "2026-03-06T22:42:05.926461Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:38:59.749903Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-06T22:37:51.310617Z", "last_refresh": "2026-03-06T22:42:05.926417Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:41:58.959413Z service:elasticsearch [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "elasticsearch", "service_type": "elasticsearch", "status": {"created": "2026-03-06T22:41:51.116481Z", "last_refresh": "2026-03-06T22:42:05.926174Z", "ports": [9200], "running": 1, "size": 1}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-06T22:37:52.581776Z", "last_refresh": "2026-03-06T22:42:14.061225Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:41:58.188734Z service:jaeger-agent [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "jaeger-agent", "service_type": "jaeger-agent", "status": {"created": "2026-03-06T22:41:51.139431Z", "ports": [6799], "running": 1, "size": 2}}, {"events": ["2026-03-06T22:41:59.701265Z service:jaeger-collector [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "jaeger-collector", "service_type": "jaeger-collector", "status": {"created": "2026-03-06T22:41:51.121608Z", "last_refresh": "2026-03-06T22:42:14.061456Z", "ports": [14250], "running": 0, "size": 1}}, {"events": ["2026-03-06T22:42:00.496940Z service:jaeger-query [INFO] \"service was created\""], 
"placement": {"count": 1}, "service_name": "jaeger-query", "service_type": "jaeger-query", "status": {"created": "2026-03-06T22:41:51.130859Z", "last_refresh": "2026-03-06T22:42:05.926269Z", "ports": [16686], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:39:01.334137Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-06T22:37:50.865444Z", "last_refresh": "2026-03-06T22:42:05.926321Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:02.384235Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm02:192.168.123.102=vm02", "vm07:192.168.123.107=vm07"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-06T22:38:51.745852Z", "last_refresh": "2026-03-06T22:42:05.925942Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:00.489648Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-06T22:37:52.956113Z", "last_refresh": "2026-03-06T22:42:05.926231Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:21.440144Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-06T22:39:21.434901Z", "last_refresh": "2026-03-06T22:42:05.926006Z", "running": 8, "size": 8}}, {"events": ["2026-03-06T22:39:02.388064Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-06T22:37:52.205852Z", "last_refresh": "2026-03-06T22:42:14.061750Z", "ports": [9095], "running": 1, "size": 1}}] 2026-03-06T23:42:14.866 INFO:tasks.cephadm:jaeger-collector has 0/1 2026-03-06T23:42:15.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:15 vm02 bash[17013]: audit 2026-03-06T22:42:14.067820+0000 mon.vm02 (mon.0) 689 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:42:15.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:15 vm02 bash[17013]: audit 2026-03-06T22:42:14.067820+0000 mon.vm02 (mon.0) 689 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:42:15.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:15 vm02 bash[17013]: audit 2026-03-06T22:42:14.074641+0000 mon.vm02 (mon.0) 690 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:42:15.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:15 vm02 bash[17013]: audit 2026-03-06T22:42:14.074641+0000 mon.vm02 (mon.0) 690 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:42:15.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:15 vm02 bash[17013]: cluster 2026-03-06T22:42:14.212015+0000 mgr.vm02.opvwec (mgr.14199) 183 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:15.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:15 vm02 bash[17013]: cluster 2026-03-06T22:42:14.212015+0000 mgr.vm02.opvwec (mgr.14199) 183 : cluster [DBG] pgmap v114: 1 pgs: 1 
active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:15.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:15 vm02 bash[17013]: audit 2026-03-06T22:42:14.892406+0000 mon.vm02 (mon.0) 691 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:42:15.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:15 vm02 bash[17013]: audit 2026-03-06T22:42:14.892406+0000 mon.vm02 (mon.0) 691 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:42:15.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:15 vm02 bash[17013]: audit 2026-03-06T22:42:14.897908+0000 mon.vm02 (mon.0) 692 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:42:15.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:15 vm02 bash[17013]: audit 2026-03-06T22:42:14.897908+0000 mon.vm02 (mon.0) 692 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:42:15.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:15 vm02 bash[17013]: audit 2026-03-06T22:42:14.898948+0000 mon.vm02 (mon.0) 693 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:42:15.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:15 vm02 bash[17013]: audit 2026-03-06T22:42:14.898948+0000 mon.vm02 (mon.0) 693 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:42:15.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:15 vm02 bash[17013]: audit 2026-03-06T22:42:14.899482+0000 mon.vm02 (mon.0) 694 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-06T23:42:15.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:15 vm02 bash[17013]: audit 2026-03-06T22:42:14.899482+0000 mon.vm02 (mon.0) 694 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-06T23:42:15.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:15 vm02 bash[17013]: audit 2026-03-06T22:42:14.903910+0000 mon.vm02 (mon.0) 695 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:42:15.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:15 vm02 bash[17013]: audit 2026-03-06T22:42:14.903910+0000 mon.vm02 (mon.0) 695 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:42:15.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:15 vm02 bash[17013]: audit 2026-03-06T22:42:14.905573+0000 mon.vm02 (mon.0) 696 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-06T23:42:15.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:15 vm02 bash[17013]: audit 2026-03-06T22:42:14.905573+0000 mon.vm02 (mon.0) 696 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-06T23:42:15.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:15 vm07 bash[20848]: audit 2026-03-06T22:42:14.067820+0000 mon.vm02 
(mon.0) 689 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:42:15.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:15 vm07 bash[20848]: audit 2026-03-06T22:42:14.067820+0000 mon.vm02 (mon.0) 689 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:42:15.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:15 vm07 bash[20848]: audit 2026-03-06T22:42:14.074641+0000 mon.vm02 (mon.0) 690 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:42:15.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:15 vm07 bash[20848]: audit 2026-03-06T22:42:14.074641+0000 mon.vm02 (mon.0) 690 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:42:15.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:15 vm07 bash[20848]: cluster 2026-03-06T22:42:14.212015+0000 mgr.vm02.opvwec (mgr.14199) 183 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:15.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:15 vm07 bash[20848]: cluster 2026-03-06T22:42:14.212015+0000 mgr.vm02.opvwec (mgr.14199) 183 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:15.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:15 vm07 bash[20848]: audit 2026-03-06T22:42:14.892406+0000 mon.vm02 (mon.0) 691 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:42:15.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:15 vm07 bash[20848]: audit 2026-03-06T22:42:14.892406+0000 mon.vm02 (mon.0) 691 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:42:15.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:15 vm07 bash[20848]: audit 2026-03-06T22:42:14.897908+0000 mon.vm02 (mon.0) 692 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:42:15.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:15 vm07 bash[20848]: audit 2026-03-06T22:42:14.897908+0000 mon.vm02 (mon.0) 692 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:42:15.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:15 vm07 bash[20848]: audit 2026-03-06T22:42:14.898948+0000 mon.vm02 (mon.0) 693 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:42:15.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:15 vm07 bash[20848]: audit 2026-03-06T22:42:14.898948+0000 mon.vm02 (mon.0) 693 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:42:15.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:15 vm07 bash[20848]: audit 2026-03-06T22:42:14.899482+0000 mon.vm02 (mon.0) 694 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-06T23:42:15.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:15 vm07 bash[20848]: audit 2026-03-06T22:42:14.899482+0000 mon.vm02 (mon.0) 694 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": 
"client.admin"}]: dispatch 2026-03-06T23:42:15.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:15 vm07 bash[20848]: audit 2026-03-06T22:42:14.903910+0000 mon.vm02 (mon.0) 695 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:42:15.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:15 vm07 bash[20848]: audit 2026-03-06T22:42:14.903910+0000 mon.vm02 (mon.0) 695 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:42:15.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:15 vm07 bash[20848]: audit 2026-03-06T22:42:14.905573+0000 mon.vm02 (mon.0) 696 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-06T23:42:15.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:15 vm07 bash[20848]: audit 2026-03-06T22:42:14.905573+0000 mon.vm02 (mon.0) 696 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-06T23:42:15.868 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph orch ls -f json 2026-03-06T23:42:16.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:16 vm02 bash[17013]: audit 2026-03-06T22:42:14.788600+0000 mgr.vm02.opvwec (mgr.14199) 184 : audit [DBG] from='client.14482 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:42:16.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:16 vm02 bash[17013]: audit 2026-03-06T22:42:14.788600+0000 mgr.vm02.opvwec (mgr.14199) 184 : audit [DBG] from='client.14482 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:42:16.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:16 vm02 bash[17013]: cluster 2026-03-06T22:42:14.900266+0000 mgr.vm02.opvwec (mgr.14199) 185 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:16.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:16 vm02 bash[17013]: cluster 2026-03-06T22:42:14.900266+0000 mgr.vm02.opvwec (mgr.14199) 185 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:16.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:16 vm02 bash[17013]: cluster 2026-03-06T22:42:14.900360+0000 mgr.vm02.opvwec (mgr.14199) 186 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:16.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:16 vm02 bash[17013]: cluster 2026-03-06T22:42:14.900360+0000 mgr.vm02.opvwec (mgr.14199) 186 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:16.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:16 vm07 bash[20848]: audit 2026-03-06T22:42:14.788600+0000 mgr.vm02.opvwec (mgr.14199) 184 : audit [DBG] from='client.14482 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:42:16.478 
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:16 vm07 bash[20848]: audit 2026-03-06T22:42:14.788600+0000 mgr.vm02.opvwec (mgr.14199) 184 : audit [DBG] from='client.14482 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:42:16.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:16 vm07 bash[20848]: cluster 2026-03-06T22:42:14.900266+0000 mgr.vm02.opvwec (mgr.14199) 185 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:16.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:16 vm07 bash[20848]: cluster 2026-03-06T22:42:14.900266+0000 mgr.vm02.opvwec (mgr.14199) 185 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:16.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:16 vm07 bash[20848]: cluster 2026-03-06T22:42:14.900360+0000 mgr.vm02.opvwec (mgr.14199) 186 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:16.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:16 vm07 bash[20848]: cluster 2026-03-06T22:42:14.900360+0000 mgr.vm02.opvwec (mgr.14199) 186 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:18.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:18 vm07 bash[20848]: cluster 2026-03-06T22:42:16.900686+0000 mgr.vm02.opvwec (mgr.14199) 187 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:18.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:18 vm07 bash[20848]: cluster 2026-03-06T22:42:16.900686+0000 mgr.vm02.opvwec (mgr.14199) 187 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:18.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:18 vm02 bash[17013]: cluster 2026-03-06T22:42:16.900686+0000 mgr.vm02.opvwec (mgr.14199) 187 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:18.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:18 vm02 bash[17013]: cluster 2026-03-06T22:42:16.900686+0000 mgr.vm02.opvwec (mgr.14199) 187 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:20.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:20 vm07 bash[20848]: cluster 2026-03-06T22:42:18.901033+0000 mgr.vm02.opvwec (mgr.14199) 188 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:20.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:20 vm07 bash[20848]: cluster 2026-03-06T22:42:18.901033+0000 mgr.vm02.opvwec (mgr.14199) 188 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:20.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:20 vm02 bash[17013]: cluster 2026-03-06T22:42:18.901033+0000 mgr.vm02.opvwec (mgr.14199) 188 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:20.493 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:20 vm02 bash[17013]: cluster 2026-03-06T22:42:18.901033+0000 mgr.vm02.opvwec (mgr.14199) 188 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB 
data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:20.664 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config 2026-03-06T23:42:21.044 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-06T23:42:21.044 INFO:teuthology.orchestra.run.vm02.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-06T22:37:53.328214Z", "last_refresh": "2026-03-06T22:42:14.061298Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:38:58.838036Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-06T22:37:51.782353Z", "last_refresh": "2026-03-06T22:42:14.061907Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:38:59.749903Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-06T22:37:51.310617Z", "last_refresh": "2026-03-06T22:42:14.061695Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:41:58.959413Z service:elasticsearch [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "elasticsearch", "service_type": "elasticsearch", "status": {"created": "2026-03-06T22:41:51.116481Z", "last_refresh": "2026-03-06T22:42:14.885961Z", "ports": [9200], "running": 1, "size": 1}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-06T22:37:52.581776Z", "last_refresh": "2026-03-06T22:42:14.061225Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:41:58.188734Z service:jaeger-agent [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "jaeger-agent", "service_type": "jaeger-agent", "status": {"created": "2026-03-06T22:41:51.139431Z", "last_refresh": "2026-03-06T22:42:14.061846Z", "ports": [6799], "running": 2, "size": 2}}, {"events": ["2026-03-06T22:41:59.701265Z service:jaeger-collector [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "jaeger-collector", "service_type": "jaeger-collector", "status": {"created": "2026-03-06T22:41:51.121608Z", "last_refresh": "2026-03-06T22:42:14.061456Z", "ports": [14250], "running": 0, "size": 1}}, {"events": ["2026-03-06T22:42:00.496940Z service:jaeger-query [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "jaeger-query", "service_type": "jaeger-query", "status": {"created": "2026-03-06T22:41:51.130859Z", "last_refresh": "2026-03-06T22:42:14.886008Z", "ports": [16686], "running": 0, "size": 1}}, {"events": ["2026-03-06T22:39:01.334137Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-06T22:37:50.865444Z", "last_refresh": "2026-03-06T22:42:14.061784Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:02.384235Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm02:192.168.123.102=vm02", "vm07:192.168.123.107=vm07"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-06T22:38:51.745852Z", "last_refresh": "2026-03-06T22:42:14.061394Z", "running": 2, "size": 2}}, {"events": 
["2026-03-06T22:39:00.489648Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-06T22:37:52.956113Z", "last_refresh": "2026-03-06T22:42:14.061815Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:21.440144Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-06T22:39:21.434901Z", "last_refresh": "2026-03-06T22:42:14.060931Z", "running": 8, "size": 8}}, {"events": ["2026-03-06T22:39:02.388064Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-06T22:37:52.205852Z", "last_refresh": "2026-03-06T22:42:14.061750Z", "ports": [9095], "running": 1, "size": 1}}] 2026-03-06T23:42:21.106 INFO:tasks.cephadm:jaeger-collector has 0/1 2026-03-06T23:42:21.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:21 vm02 bash[17013]: audit 2026-03-06T22:42:20.805532+0000 mon.vm02 (mon.0) 697 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:42:21.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:21 vm02 bash[17013]: audit 2026-03-06T22:42:20.805532+0000 mon.vm02 (mon.0) 697 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:42:21.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:21 vm02 bash[17013]: audit 2026-03-06T22:42:20.806119+0000 mon.vm02 (mon.0) 698 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:42:21.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:21 vm02 bash[17013]: audit 2026-03-06T22:42:20.806119+0000 mon.vm02 (mon.0) 698 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:42:21.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:21 vm02 bash[17013]: cluster 2026-03-06T22:42:20.901303+0000 mgr.vm02.opvwec (mgr.14199) 189 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:21.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:21 vm02 bash[17013]: cluster 2026-03-06T22:42:20.901303+0000 mgr.vm02.opvwec (mgr.14199) 189 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:21.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:21 vm02 bash[17013]: audit 2026-03-06T22:42:21.036673+0000 mgr.vm02.opvwec (mgr.14199) 190 : audit [DBG] from='client.14486 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:42:21.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:21 vm02 bash[17013]: audit 2026-03-06T22:42:21.036673+0000 mgr.vm02.opvwec (mgr.14199) 190 : audit [DBG] from='client.14486 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:42:22.106 DEBUG:teuthology.orchestra.run.vm02:> sudo 
/home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph orch ls -f json 2026-03-06T23:42:22.228 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:21 vm07 bash[20848]: audit 2026-03-06T22:42:20.805532+0000 mon.vm02 (mon.0) 697 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:42:22.228 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:21 vm07 bash[20848]: audit 2026-03-06T22:42:20.805532+0000 mon.vm02 (mon.0) 697 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:42:22.228 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:21 vm07 bash[20848]: audit 2026-03-06T22:42:20.806119+0000 mon.vm02 (mon.0) 698 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:42:22.228 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:21 vm07 bash[20848]: audit 2026-03-06T22:42:20.806119+0000 mon.vm02 (mon.0) 698 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:42:22.228 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:21 vm07 bash[20848]: cluster 2026-03-06T22:42:20.901303+0000 mgr.vm02.opvwec (mgr.14199) 189 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:22.228 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:21 vm07 bash[20848]: cluster 2026-03-06T22:42:20.901303+0000 mgr.vm02.opvwec (mgr.14199) 189 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:22.228 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:21 vm07 bash[20848]: audit 2026-03-06T22:42:21.036673+0000 mgr.vm02.opvwec (mgr.14199) 190 : audit [DBG] from='client.14486 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:42:22.229 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:21 vm07 bash[20848]: audit 2026-03-06T22:42:21.036673+0000 mgr.vm02.opvwec (mgr.14199) 190 : audit [DBG] from='client.14486 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:42:24.214 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:23 vm02 bash[17013]: cluster 2026-03-06T22:42:22.901627+0000 mgr.vm02.opvwec (mgr.14199) 191 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:24.214 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:23 vm02 bash[17013]: cluster 2026-03-06T22:42:22.901627+0000 mgr.vm02.opvwec (mgr.14199) 191 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:24.228 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:23 vm07 bash[20848]: cluster 2026-03-06T22:42:22.901627+0000 mgr.vm02.opvwec (mgr.14199) 191 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:24.228 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:23 vm07 bash[20848]: cluster 2026-03-06T22:42:22.901627+0000 mgr.vm02.opvwec (mgr.14199) 191 : cluster [DBG] pgmap 
v120: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:26.228 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:25 vm07 bash[20848]: cluster 2026-03-06T22:42:24.901891+0000 mgr.vm02.opvwec (mgr.14199) 192 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:26.228 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:25 vm07 bash[20848]: cluster 2026-03-06T22:42:24.901891+0000 mgr.vm02.opvwec (mgr.14199) 192 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:26.242 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:25 vm02 bash[17013]: cluster 2026-03-06T22:42:24.901891+0000 mgr.vm02.opvwec (mgr.14199) 192 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:26.242 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:25 vm02 bash[17013]: cluster 2026-03-06T22:42:24.901891+0000 mgr.vm02.opvwec (mgr.14199) 192 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:26.909 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config 2026-03-06T23:42:27.276 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-06T23:42:27.276 INFO:teuthology.orchestra.run.vm02.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-06T22:37:53.328214Z", "last_refresh": "2026-03-06T22:42:14.061298Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:38:58.838036Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-06T22:37:51.782353Z", "last_refresh": "2026-03-06T22:42:14.061907Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:38:59.749903Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-06T22:37:51.310617Z", "last_refresh": "2026-03-06T22:42:14.061695Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:41:58.959413Z service:elasticsearch [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "elasticsearch", "service_type": "elasticsearch", "status": {"created": "2026-03-06T22:41:51.116481Z", "last_refresh": "2026-03-06T22:42:14.885961Z", "ports": [9200], "running": 1, "size": 1}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-06T22:37:52.581776Z", "last_refresh": "2026-03-06T22:42:14.061225Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:41:58.188734Z service:jaeger-agent [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "jaeger-agent", "service_type": "jaeger-agent", "status": {"created": "2026-03-06T22:41:51.139431Z", "last_refresh": "2026-03-06T22:42:14.061846Z", "ports": [6799], "running": 2, "size": 2}}, {"events": ["2026-03-06T22:41:59.701265Z service:jaeger-collector [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "jaeger-collector", "service_type": "jaeger-collector", "status": 
{"created": "2026-03-06T22:41:51.121608Z", "last_refresh": "2026-03-06T22:42:14.061456Z", "ports": [14250], "running": 0, "size": 1}}, {"events": ["2026-03-06T22:42:00.496940Z service:jaeger-query [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "jaeger-query", "service_type": "jaeger-query", "status": {"created": "2026-03-06T22:41:51.130859Z", "last_refresh": "2026-03-06T22:42:14.886008Z", "ports": [16686], "running": 0, "size": 1}}, {"events": ["2026-03-06T22:39:01.334137Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-06T22:37:50.865444Z", "last_refresh": "2026-03-06T22:42:14.061784Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:02.384235Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm02:192.168.123.102=vm02", "vm07:192.168.123.107=vm07"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-06T22:38:51.745852Z", "last_refresh": "2026-03-06T22:42:14.061394Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:00.489648Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-06T22:37:52.956113Z", "last_refresh": "2026-03-06T22:42:14.061815Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:21.440144Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-06T22:39:21.434901Z", "last_refresh": "2026-03-06T22:42:14.060931Z", "running": 8, "size": 8}}, {"events": ["2026-03-06T22:39:02.388064Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-06T22:37:52.205852Z", "last_refresh": "2026-03-06T22:42:14.061750Z", "ports": [9095], "running": 1, "size": 1}}] 2026-03-06T23:42:27.342 INFO:tasks.cephadm:jaeger-collector has 0/1 2026-03-06T23:42:28.228 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:27 vm07 bash[20848]: cluster 2026-03-06T22:42:26.902193+0000 mgr.vm02.opvwec (mgr.14199) 193 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:28.228 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:27 vm07 bash[20848]: cluster 2026-03-06T22:42:26.902193+0000 mgr.vm02.opvwec (mgr.14199) 193 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:28.228 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:27 vm07 bash[20848]: audit 2026-03-06T22:42:27.269581+0000 mgr.vm02.opvwec (mgr.14199) 194 : audit [DBG] from='client.14490 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:42:28.228 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:27 vm07 bash[20848]: audit 2026-03-06T22:42:27.269581+0000 mgr.vm02.opvwec (mgr.14199) 194 : audit [DBG] from='client.14490 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:42:28.242 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:27 vm02 bash[17013]: cluster 2026-03-06T22:42:26.902193+0000 mgr.vm02.opvwec (mgr.14199) 193 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:28.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:27 vm02 bash[17013]: cluster 2026-03-06T22:42:26.902193+0000 mgr.vm02.opvwec (mgr.14199) 193 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:28.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:27 vm02 bash[17013]: audit 2026-03-06T22:42:27.269581+0000 mgr.vm02.opvwec (mgr.14199) 194 : audit [DBG] from='client.14490 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:42:28.243 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:27 vm02 bash[17013]: audit 2026-03-06T22:42:27.269581+0000 mgr.vm02.opvwec (mgr.14199) 194 : audit [DBG] from='client.14490 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:42:28.343 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph orch ls -f json 2026-03-06T23:42:30.242 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:29 vm02 bash[17013]: cluster 2026-03-06T22:42:28.902513+0000 mgr.vm02.opvwec (mgr.14199) 195 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:30.242 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:29 vm02 bash[17013]: cluster 2026-03-06T22:42:28.902513+0000 mgr.vm02.opvwec (mgr.14199) 195 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:30.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:29 vm07 bash[20848]: cluster 2026-03-06T22:42:28.902513+0000 mgr.vm02.opvwec (mgr.14199) 195 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:30.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:29 vm07 bash[20848]: cluster 2026-03-06T22:42:28.902513+0000 mgr.vm02.opvwec (mgr.14199) 195 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:32.242 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:31 vm02 bash[17013]: cluster 2026-03-06T22:42:30.902759+0000 mgr.vm02.opvwec (mgr.14199) 196 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:32.242 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:31 vm02 bash[17013]: cluster 2026-03-06T22:42:30.902759+0000 mgr.vm02.opvwec (mgr.14199) 196 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:32.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:31 vm07 bash[20848]: cluster 2026-03-06T22:42:30.902759+0000 mgr.vm02.opvwec (mgr.14199) 196 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:32.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:31 vm07 bash[20848]: cluster 2026-03-06T22:42:30.902759+0000 
mgr.vm02.opvwec (mgr.14199) 196 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:33.129 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config 2026-03-06T23:42:33.492 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-06T23:42:33.492 INFO:teuthology.orchestra.run.vm02.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-06T22:37:53.328214Z", "last_refresh": "2026-03-06T22:42:14.061298Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:38:58.838036Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-06T22:37:51.782353Z", "last_refresh": "2026-03-06T22:42:14.061907Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:38:59.749903Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-06T22:37:51.310617Z", "last_refresh": "2026-03-06T22:42:14.061695Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:41:58.959413Z service:elasticsearch [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "elasticsearch", "service_type": "elasticsearch", "status": {"created": "2026-03-06T22:41:51.116481Z", "last_refresh": "2026-03-06T22:42:14.885961Z", "ports": [9200], "running": 1, "size": 1}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-06T22:37:52.581776Z", "last_refresh": "2026-03-06T22:42:14.061225Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:41:58.188734Z service:jaeger-agent [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "jaeger-agent", "service_type": "jaeger-agent", "status": {"created": "2026-03-06T22:41:51.139431Z", "last_refresh": "2026-03-06T22:42:14.061846Z", "ports": [6799], "running": 2, "size": 2}}, {"events": ["2026-03-06T22:41:59.701265Z service:jaeger-collector [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "jaeger-collector", "service_type": "jaeger-collector", "status": {"created": "2026-03-06T22:41:51.121608Z", "last_refresh": "2026-03-06T22:42:14.061456Z", "ports": [14250], "running": 0, "size": 1}}, {"events": ["2026-03-06T22:42:00.496940Z service:jaeger-query [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "jaeger-query", "service_type": "jaeger-query", "status": {"created": "2026-03-06T22:41:51.130859Z", "last_refresh": "2026-03-06T22:42:14.886008Z", "ports": [16686], "running": 0, "size": 1}}, {"events": ["2026-03-06T22:39:01.334137Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-06T22:37:50.865444Z", "last_refresh": "2026-03-06T22:42:14.061784Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:02.384235Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm02:192.168.123.102=vm02", "vm07:192.168.123.107=vm07"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-06T22:38:51.745852Z", "last_refresh": 
"2026-03-06T22:42:14.061394Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:00.489648Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-06T22:37:52.956113Z", "last_refresh": "2026-03-06T22:42:14.061815Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:21.440144Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-06T22:39:21.434901Z", "last_refresh": "2026-03-06T22:42:14.060931Z", "running": 8, "size": 8}}, {"events": ["2026-03-06T22:39:02.388064Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-06T22:37:52.205852Z", "last_refresh": "2026-03-06T22:42:14.061750Z", "ports": [9095], "running": 1, "size": 1}}] 2026-03-06T23:42:33.551 INFO:tasks.cephadm:jaeger-collector has 0/1 2026-03-06T23:42:34.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:33 vm07 bash[20848]: cluster 2026-03-06T22:42:32.903001+0000 mgr.vm02.opvwec (mgr.14199) 197 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:34.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:33 vm07 bash[20848]: cluster 2026-03-06T22:42:32.903001+0000 mgr.vm02.opvwec (mgr.14199) 197 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:34.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:33 vm07 bash[20848]: audit 2026-03-06T22:42:33.486015+0000 mgr.vm02.opvwec (mgr.14199) 198 : audit [DBG] from='client.14494 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:42:34.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:33 vm07 bash[20848]: audit 2026-03-06T22:42:33.486015+0000 mgr.vm02.opvwec (mgr.14199) 198 : audit [DBG] from='client.14494 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:42:34.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:33 vm02 bash[17013]: cluster 2026-03-06T22:42:32.903001+0000 mgr.vm02.opvwec (mgr.14199) 197 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:34.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:33 vm02 bash[17013]: cluster 2026-03-06T22:42:32.903001+0000 mgr.vm02.opvwec (mgr.14199) 197 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:34.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:33 vm02 bash[17013]: audit 2026-03-06T22:42:33.486015+0000 mgr.vm02.opvwec (mgr.14199) 198 : audit [DBG] from='client.14494 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:42:34.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:33 vm02 bash[17013]: audit 2026-03-06T22:42:33.486015+0000 mgr.vm02.opvwec (mgr.14199) 198 : audit [DBG] from='client.14494 -' entity='client.admin' cmd=[{"prefix": "orch 
ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:42:34.552 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph orch ls -f json 2026-03-06T23:42:36.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:36 vm07 bash[20848]: cluster 2026-03-06T22:42:34.903240+0000 mgr.vm02.opvwec (mgr.14199) 199 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:36.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:36 vm07 bash[20848]: cluster 2026-03-06T22:42:34.903240+0000 mgr.vm02.opvwec (mgr.14199) 199 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:36.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:36 vm07 bash[20848]: audit 2026-03-06T22:42:35.801998+0000 mon.vm02 (mon.0) 699 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:42:36.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:36 vm07 bash[20848]: audit 2026-03-06T22:42:35.801998+0000 mon.vm02 (mon.0) 699 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:42:36.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:36 vm02 bash[17013]: cluster 2026-03-06T22:42:34.903240+0000 mgr.vm02.opvwec (mgr.14199) 199 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:36.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:36 vm02 bash[17013]: cluster 2026-03-06T22:42:34.903240+0000 mgr.vm02.opvwec (mgr.14199) 199 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:36.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:36 vm02 bash[17013]: audit 2026-03-06T22:42:35.801998+0000 mon.vm02 (mon.0) 699 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:42:36.492 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:36 vm02 bash[17013]: audit 2026-03-06T22:42:35.801998+0000 mon.vm02 (mon.0) 699 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:42:38.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:38 vm07 bash[20848]: cluster 2026-03-06T22:42:36.903527+0000 mgr.vm02.opvwec (mgr.14199) 200 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:38.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:38 vm07 bash[20848]: cluster 2026-03-06T22:42:36.903527+0000 mgr.vm02.opvwec (mgr.14199) 200 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:38.491 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:38 vm02 bash[17013]: cluster 2026-03-06T22:42:36.903527+0000 mgr.vm02.opvwec (mgr.14199) 200 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 
2026-03-06T23:42:38.491 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:38 vm02 bash[17013]: cluster 2026-03-06T22:42:36.903527+0000 mgr.vm02.opvwec (mgr.14199) 200 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:39.323 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config 2026-03-06T23:42:39.660 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-06T23:42:39.660 INFO:teuthology.orchestra.run.vm02.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-06T22:37:53.328214Z", "last_refresh": "2026-03-06T22:42:14.061298Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:38:58.838036Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-06T22:37:51.782353Z", "last_refresh": "2026-03-06T22:42:14.061907Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:38:59.749903Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-06T22:37:51.310617Z", "last_refresh": "2026-03-06T22:42:14.061695Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:41:58.959413Z service:elasticsearch [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "elasticsearch", "service_type": "elasticsearch", "status": {"created": "2026-03-06T22:41:51.116481Z", "last_refresh": "2026-03-06T22:42:14.885961Z", "ports": [9200], "running": 1, "size": 1}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-06T22:37:52.581776Z", "last_refresh": "2026-03-06T22:42:14.061225Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:41:58.188734Z service:jaeger-agent [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "jaeger-agent", "service_type": "jaeger-agent", "status": {"created": "2026-03-06T22:41:51.139431Z", "last_refresh": "2026-03-06T22:42:14.061846Z", "ports": [6799], "running": 2, "size": 2}}, {"events": ["2026-03-06T22:41:59.701265Z service:jaeger-collector [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "jaeger-collector", "service_type": "jaeger-collector", "status": {"created": "2026-03-06T22:41:51.121608Z", "last_refresh": "2026-03-06T22:42:14.061456Z", "ports": [14250], "running": 0, "size": 1}}, {"events": ["2026-03-06T22:42:00.496940Z service:jaeger-query [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "jaeger-query", "service_type": "jaeger-query", "status": {"created": "2026-03-06T22:41:51.130859Z", "last_refresh": "2026-03-06T22:42:14.886008Z", "ports": [16686], "running": 0, "size": 1}}, {"events": ["2026-03-06T22:39:01.334137Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-06T22:37:50.865444Z", "last_refresh": "2026-03-06T22:42:14.061784Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:02.384235Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm02:192.168.123.102=vm02", 
"vm07:192.168.123.107=vm07"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-06T22:38:51.745852Z", "last_refresh": "2026-03-06T22:42:14.061394Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:00.489648Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-06T22:37:52.956113Z", "last_refresh": "2026-03-06T22:42:14.061815Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:21.440144Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-06T22:39:21.434901Z", "last_refresh": "2026-03-06T22:42:14.060931Z", "running": 8, "size": 8}}, {"events": ["2026-03-06T22:39:02.388064Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-06T22:37:52.205852Z", "last_refresh": "2026-03-06T22:42:14.061750Z", "ports": [9095], "running": 1, "size": 1}}] 2026-03-06T23:42:39.715 INFO:tasks.cephadm:jaeger-collector has 0/1 2026-03-06T23:42:40.335 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:40 vm02 bash[17013]: cluster 2026-03-06T22:42:38.903828+0000 mgr.vm02.opvwec (mgr.14199) 201 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:40.335 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:40 vm02 bash[17013]: cluster 2026-03-06T22:42:38.903828+0000 mgr.vm02.opvwec (mgr.14199) 201 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:40.335 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:40 vm02 bash[17013]: audit 2026-03-06T22:42:39.654621+0000 mgr.vm02.opvwec (mgr.14199) 202 : audit [DBG] from='client.14498 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:42:40.335 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:40 vm02 bash[17013]: audit 2026-03-06T22:42:39.654621+0000 mgr.vm02.opvwec (mgr.14199) 202 : audit [DBG] from='client.14498 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:42:40.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:40 vm07 bash[20848]: cluster 2026-03-06T22:42:38.903828+0000 mgr.vm02.opvwec (mgr.14199) 201 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:40.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:40 vm07 bash[20848]: cluster 2026-03-06T22:42:38.903828+0000 mgr.vm02.opvwec (mgr.14199) 201 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:42:40.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:40 vm07 bash[20848]: audit 2026-03-06T22:42:39.654621+0000 mgr.vm02.opvwec (mgr.14199) 202 : audit [DBG] from='client.14498 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:42:40.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:40 vm07 bash[20848]: audit 
2026-03-06T23:42:40.716 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph orch ls -f json
2026-03-06T23:42:42.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:42 vm07 bash[20848]: cluster 2026-03-06T22:42:40.904102+0000 mgr.vm02.opvwec (mgr.14199) 203 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:42:42.491 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:42 vm02 bash[17013]: cluster 2026-03-06T22:42:40.904102+0000 mgr.vm02.opvwec (mgr.14199) 203 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:42:44.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:44 vm07 bash[20848]: cluster 2026-03-06T22:42:42.904372+0000 mgr.vm02.opvwec (mgr.14199) 204 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:42:44.490 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:44 vm02 bash[17013]: cluster 2026-03-06T22:42:42.904372+0000 mgr.vm02.opvwec (mgr.14199) 204 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:42:45.486 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:42:45.996 INFO:tasks.cephadm:jaeger-collector has 0/1
2026-03-06T23:42:46.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:46 vm07 bash[20848]: cluster 2026-03-06T22:42:44.904649+0000 mgr.vm02.opvwec (mgr.14199) 205 : cluster [DBG] pgmap v131: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:42:46.740 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:46 vm02 bash[17013]: cluster 2026-03-06T22:42:44.904649+0000 mgr.vm02.opvwec (mgr.14199) 205 : cluster [DBG] pgmap v131: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:42:46.997 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph orch ls -f json
2026-03-06T23:42:47.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:47 vm07 bash[20848]: audit 2026-03-06T22:42:45.929654+0000 mgr.vm02.opvwec (mgr.14199) 206 : audit [DBG] from='client.14502 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-06T23:42:47.740 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:47 vm02 bash[17013]: audit 2026-03-06T22:42:45.929654+0000 mgr.vm02.opvwec (mgr.14199) 206 : audit [DBG] from='client.14502 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-06T23:42:48.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:48 vm07 bash[20848]: cluster 2026-03-06T22:42:46.904927+0000 mgr.vm02.opvwec (mgr.14199) 207 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:42:48.740 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:48 vm02 bash[17013]: cluster 2026-03-06T22:42:46.904927+0000 mgr.vm02.opvwec (mgr.14199) 207 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:42:50.642 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:50 vm02 bash[17013]: cluster 2026-03-06T22:42:48.905177+0000 mgr.vm02.opvwec (mgr.14199) 208 : cluster [DBG] pgmap v133: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:42:50.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:50 vm07 bash[20848]: cluster 2026-03-06T22:42:48.905177+0000 mgr.vm02.opvwec (mgr.14199) 208 : cluster [DBG] pgmap v133: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:42:51.740 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:51 vm02 bash[17013]: audit 2026-03-06T22:42:50.802054+0000 mon.vm02 (mon.0) 700 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:42:51.778 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:42:51.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:51 vm07 bash[20848]: audit 2026-03-06T22:42:50.802054+0000 mon.vm02 (mon.0) 700 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:42:52.197 INFO:tasks.cephadm:jaeger-collector has 0/1
2026-03-06T23:42:52.740 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:52 vm02 bash[17013]: cluster 2026-03-06T22:42:50.905395+0000 mgr.vm02.opvwec (mgr.14199) 209 : cluster [DBG] pgmap v134: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:42:52.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:52 vm07 bash[20848]: cluster 2026-03-06T22:42:50.905395+0000 mgr.vm02.opvwec (mgr.14199) 209 : cluster [DBG] pgmap v134: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:42:53.198 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph orch ls -f json
2026-03-06T23:42:53.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:53 vm07 bash[20848]: audit 2026-03-06T22:42:52.131942+0000 mgr.vm02.opvwec (mgr.14199) 210 : audit [DBG] from='client.14506 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-06T23:42:53.990 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:53 vm02 bash[17013]: audit 2026-03-06T22:42:52.131942+0000 mgr.vm02.opvwec (mgr.14199) 210 : audit [DBG] from='client.14506 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-06T23:42:54.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:54 vm07 bash[20848]: cluster 2026-03-06T22:42:52.905679+0000 mgr.vm02.opvwec (mgr.14199) 211 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:42:54.990 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:54 vm02 bash[17013]: cluster 2026-03-06T22:42:52.905679+0000 mgr.vm02.opvwec (mgr.14199) 211 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:42:56.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:56 vm07 bash[20848]: cluster 2026-03-06T22:42:54.905966+0000 mgr.vm02.opvwec (mgr.14199) 212 : cluster [DBG] pgmap v136: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:42:56.989 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:56 vm02 bash[17013]: cluster 2026-03-06T22:42:54.905966+0000 mgr.vm02.opvwec (mgr.14199) 212 : cluster [DBG] pgmap v136: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:42:57.986 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:42:58.417 INFO:tasks.cephadm:jaeger-collector has 0/1
2026-03-06T23:42:58.740 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:58 vm02 bash[17013]: cluster 2026-03-06T22:42:56.906237+0000 mgr.vm02.opvwec (mgr.14199) 213 : cluster [DBG] pgmap v137: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:42:58.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:58 vm07 bash[20848]: cluster 2026-03-06T22:42:56.906237+0000 mgr.vm02.opvwec (mgr.14199) 213 : cluster [DBG] pgmap v137: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:42:59.417 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph orch ls -f json
2026-03-06T23:42:59.740 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:42:59 vm02 bash[17013]: audit 2026-03-06T22:42:58.344822+0000 mgr.vm02.opvwec (mgr.14199) 214 : audit [DBG] from='client.14510 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-06T23:42:59.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:42:59 vm07 bash[20848]: audit 2026-03-06T22:42:58.344822+0000 mgr.vm02.opvwec (mgr.14199) 214 : audit [DBG] from='client.14510 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-06T23:43:00.739 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:00 vm02 bash[17013]: cluster 2026-03-06T22:42:58.906503+0000 mgr.vm02.opvwec (mgr.14199) 215 : cluster [DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:43:00.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:00 vm07 bash[20848]: cluster 2026-03-06T22:42:58.906503+0000 mgr.vm02.opvwec (mgr.14199) 215 : cluster [DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:43:02.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:02 vm07 bash[20848]: cluster 2026-03-06T22:43:00.906752+0000 mgr.vm02.opvwec (mgr.14199) 216 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:43:02.989 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:02 vm02 bash[17013]: cluster 2026-03-06T22:43:00.906752+0000 mgr.vm02.opvwec (mgr.14199) 216 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:43:04.199 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:43:04.622 INFO:tasks.cephadm:jaeger-collector has 0/1
2026-03-06T23:43:04.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:04 vm07 bash[20848]: cluster 2026-03-06T22:43:02.907034+0000 mgr.vm02.opvwec (mgr.14199) 217 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:43:04.989 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:04 vm02 bash[17013]: cluster 2026-03-06T22:43:02.907034+0000 mgr.vm02.opvwec (mgr.14199) 217 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:43:05.623 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph orch ls -f json
2026-03-06T23:43:05.739 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:05 vm02 bash[17013]: audit 2026-03-06T22:43:04.558811+0000 mgr.vm02.opvwec (mgr.14199) 218 : audit [DBG] from='client.14514 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
"target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:43:05.739 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:05 vm02 bash[17013]: audit 2026-03-06T22:43:04.558811+0000 mgr.vm02.opvwec (mgr.14199) 218 : audit [DBG] from='client.14514 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:43:05.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:05 vm07 bash[20848]: audit 2026-03-06T22:43:04.558811+0000 mgr.vm02.opvwec (mgr.14199) 218 : audit [DBG] from='client.14514 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:43:05.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:05 vm07 bash[20848]: audit 2026-03-06T22:43:04.558811+0000 mgr.vm02.opvwec (mgr.14199) 218 : audit [DBG] from='client.14514 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:43:06.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:06 vm07 bash[20848]: cluster 2026-03-06T22:43:04.907298+0000 mgr.vm02.opvwec (mgr.14199) 219 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:06.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:06 vm07 bash[20848]: cluster 2026-03-06T22:43:04.907298+0000 mgr.vm02.opvwec (mgr.14199) 219 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:06.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:06 vm07 bash[20848]: audit 2026-03-06T22:43:05.802596+0000 mon.vm02 (mon.0) 701 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:43:06.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:06 vm07 bash[20848]: audit 2026-03-06T22:43:05.802596+0000 mon.vm02 (mon.0) 701 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:43:06.989 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:06 vm02 bash[17013]: cluster 2026-03-06T22:43:04.907298+0000 mgr.vm02.opvwec (mgr.14199) 219 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:06.989 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:06 vm02 bash[17013]: cluster 2026-03-06T22:43:04.907298+0000 mgr.vm02.opvwec (mgr.14199) 219 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:06.989 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:06 vm02 bash[17013]: audit 2026-03-06T22:43:05.802596+0000 mon.vm02 (mon.0) 701 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:43:06.989 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:06 vm02 bash[17013]: audit 2026-03-06T22:43:05.802596+0000 mon.vm02 (mon.0) 701 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:43:08.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:08 vm07 bash[20848]: cluster 2026-03-06T22:43:06.907530+0000 mgr.vm02.opvwec (mgr.14199) 220 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 
449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:08.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:08 vm07 bash[20848]: cluster 2026-03-06T22:43:06.907530+0000 mgr.vm02.opvwec (mgr.14199) 220 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:08.989 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:08 vm02 bash[17013]: cluster 2026-03-06T22:43:06.907530+0000 mgr.vm02.opvwec (mgr.14199) 220 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:08.989 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:08 vm02 bash[17013]: cluster 2026-03-06T22:43:06.907530+0000 mgr.vm02.opvwec (mgr.14199) 220 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:10.410 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config 2026-03-06T23:43:10.739 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:10 vm02 bash[17013]: cluster 2026-03-06T22:43:08.907824+0000 mgr.vm02.opvwec (mgr.14199) 221 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:10.739 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:10 vm02 bash[17013]: cluster 2026-03-06T22:43:08.907824+0000 mgr.vm02.opvwec (mgr.14199) 221 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:10.775 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-06T23:43:10.775 INFO:teuthology.orchestra.run.vm02.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-06T22:37:53.328214Z", "last_refresh": "2026-03-06T22:42:14.061298Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:38:58.838036Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-06T22:37:51.782353Z", "last_refresh": "2026-03-06T22:42:14.061907Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:38:59.749903Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-06T22:37:51.310617Z", "last_refresh": "2026-03-06T22:42:14.061695Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:41:58.959413Z service:elasticsearch [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "elasticsearch", "service_type": "elasticsearch", "status": {"created": "2026-03-06T22:41:51.116481Z", "last_refresh": "2026-03-06T22:42:14.885961Z", "ports": [9200], "running": 1, "size": 1}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-06T22:37:52.581776Z", "last_refresh": "2026-03-06T22:42:14.061225Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:41:58.188734Z service:jaeger-agent [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "jaeger-agent", "service_type": "jaeger-agent", "status": {"created": "2026-03-06T22:41:51.139431Z", "last_refresh": "2026-03-06T22:42:14.061846Z", "ports": 
[6799], "running": 2, "size": 2}}, {"events": ["2026-03-06T22:41:59.701265Z service:jaeger-collector [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "jaeger-collector", "service_type": "jaeger-collector", "status": {"created": "2026-03-06T22:41:51.121608Z", "last_refresh": "2026-03-06T22:42:14.061456Z", "ports": [14250], "running": 0, "size": 1}}, {"events": ["2026-03-06T22:42:00.496940Z service:jaeger-query [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "jaeger-query", "service_type": "jaeger-query", "status": {"created": "2026-03-06T22:41:51.130859Z", "last_refresh": "2026-03-06T22:42:14.886008Z", "ports": [16686], "running": 0, "size": 1}}, {"events": ["2026-03-06T22:39:01.334137Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-06T22:37:50.865444Z", "last_refresh": "2026-03-06T22:42:14.061784Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:02.384235Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm02:192.168.123.102=vm02", "vm07:192.168.123.107=vm07"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-06T22:38:51.745852Z", "last_refresh": "2026-03-06T22:42:14.061394Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:00.489648Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-06T22:37:52.956113Z", "last_refresh": "2026-03-06T22:42:14.061815Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:21.440144Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-06T22:39:21.434901Z", "last_refresh": "2026-03-06T22:42:14.060931Z", "running": 8, "size": 8}}, {"events": ["2026-03-06T22:39:02.388064Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-06T22:37:52.205852Z", "last_refresh": "2026-03-06T22:42:14.061750Z", "ports": [9095], "running": 1, "size": 1}}] 2026-03-06T23:43:10.836 INFO:tasks.cephadm:jaeger-collector has 0/1 2026-03-06T23:43:10.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:10 vm07 bash[20848]: cluster 2026-03-06T22:43:08.907824+0000 mgr.vm02.opvwec (mgr.14199) 221 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:10.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:10 vm07 bash[20848]: cluster 2026-03-06T22:43:08.907824+0000 mgr.vm02.opvwec (mgr.14199) 221 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:11.837 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph orch ls -f json 2026-03-06T23:43:12.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:12 vm07 bash[20848]: audit 
2026-03-06T23:43:12.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:12 vm07 bash[20848]: cluster 2026-03-06T22:43:10.908113+0000 mgr.vm02.opvwec (mgr.14199) 223 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:43:12.989 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:12 vm02 bash[17013]: audit 2026-03-06T22:43:10.772216+0000 mgr.vm02.opvwec (mgr.14199) 222 : audit [DBG] from='client.14518 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-06T23:43:12.989 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:12 vm02 bash[17013]: cluster 2026-03-06T22:43:10.908113+0000 mgr.vm02.opvwec (mgr.14199) 223 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:43:14.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:14 vm07 bash[20848]: cluster 2026-03-06T22:43:12.908414+0000 mgr.vm02.opvwec (mgr.14199) 224 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:43:14.989 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:14 vm02 bash[17013]: cluster 2026-03-06T22:43:12.908414+0000 mgr.vm02.opvwec (mgr.14199) 224 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:43:15.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:15 vm07 bash[20848]: audit 2026-03-06T22:43:14.948519+0000 mon.vm02 (mon.0) 702 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-06T23:43:15.989 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:15 vm02 bash[17013]: audit 2026-03-06T22:43:14.948519+0000 mon.vm02 (mon.0) 702 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-06T23:43:16.629 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:43:16.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:16 vm07 bash[20848]: cluster 2026-03-06T22:43:14.908721+0000 mgr.vm02.opvwec (mgr.14199) 225 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:43:16.989 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:16 vm02 bash[17013]: cluster 2026-03-06T22:43:14.908721+0000 mgr.vm02.opvwec (mgr.14199) 225 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:43:17.724 INFO:tasks.cephadm:jaeger-collector has 0/1
2026-03-06T23:43:18.724 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph orch ls -f json
2026-03-06T23:43:18.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:18 vm02 bash[17013]: cluster 2026-03-06T22:43:16.908988+0000 mgr.vm02.opvwec (mgr.14199) 226 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:43:18.993 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:18 vm02 bash[17013]: audit 2026-03-06T22:43:16.995386+0000 mgr.vm02.opvwec (mgr.14199) 227 : audit [DBG] from='client.14522 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-06T23:43:18.995 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:18 vm07 bash[20848]: cluster 2026-03-06T22:43:16.908988+0000 mgr.vm02.opvwec (mgr.14199) 226 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:43:18.995 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:18 vm07 bash[20848]: audit 2026-03-06T22:43:16.995386+0000 mgr.vm02.opvwec (mgr.14199) 227 : audit [DBG] from='client.14522 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-06T23:43:20.228 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:19 vm07 bash[20848]: cluster 2026-03-06T22:43:18.909239+0000 mgr.vm02.opvwec (mgr.14199) 228 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:43:20.238 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:19 vm02 bash[17013]: cluster 2026-03-06T22:43:18.909239+0000 mgr.vm02.opvwec (mgr.14199) 228 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:43:21.478
INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:21 vm07 bash[20848]: audit 2026-03-06T22:43:20.107155+0000 mon.vm02 (mon.0) 703 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:43:21.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:21 vm07 bash[20848]: audit 2026-03-06T22:43:20.107155+0000 mon.vm02 (mon.0) 703 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:43:21.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:21 vm07 bash[20848]: audit 2026-03-06T22:43:20.113550+0000 mon.vm02 (mon.0) 704 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:43:21.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:21 vm07 bash[20848]: audit 2026-03-06T22:43:20.113550+0000 mon.vm02 (mon.0) 704 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:43:21.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:21 vm07 bash[20848]: audit 2026-03-06T22:43:20.802719+0000 mon.vm02 (mon.0) 705 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:43:21.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:21 vm07 bash[20848]: audit 2026-03-06T22:43:20.802719+0000 mon.vm02 (mon.0) 705 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:43:21.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:21 vm07 bash[20848]: audit 2026-03-06T22:43:20.916898+0000 mon.vm02 (mon.0) 706 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:43:21.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:21 vm07 bash[20848]: audit 2026-03-06T22:43:20.916898+0000 mon.vm02 (mon.0) 706 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:43:21.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:21 vm07 bash[20848]: audit 2026-03-06T22:43:20.922831+0000 mon.vm02 (mon.0) 707 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:43:21.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:21 vm07 bash[20848]: audit 2026-03-06T22:43:20.922831+0000 mon.vm02 (mon.0) 707 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:43:21.489 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:21 vm02 bash[17013]: audit 2026-03-06T22:43:20.107155+0000 mon.vm02 (mon.0) 703 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:43:21.489 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:21 vm02 bash[17013]: audit 2026-03-06T22:43:20.107155+0000 mon.vm02 (mon.0) 703 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:43:21.489 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:21 vm02 bash[17013]: audit 2026-03-06T22:43:20.113550+0000 mon.vm02 (mon.0) 704 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:43:21.489 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:21 vm02 bash[17013]: audit 2026-03-06T22:43:20.113550+0000 mon.vm02 (mon.0) 704 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:43:21.489 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:21 vm02 bash[17013]: audit 2026-03-06T22:43:20.802719+0000 mon.vm02 (mon.0) 705 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:43:21.489 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:21 vm02 bash[17013]: audit 2026-03-06T22:43:20.802719+0000 mon.vm02 (mon.0) 705 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:43:21.489 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:21 vm02 bash[17013]: audit 2026-03-06T22:43:20.916898+0000 mon.vm02 (mon.0) 706 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:43:21.489 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:21 vm02 bash[17013]: audit 2026-03-06T22:43:20.916898+0000 mon.vm02 (mon.0) 706 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:43:21.489 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:21 vm02 bash[17013]: audit 2026-03-06T22:43:20.922831+0000 mon.vm02 (mon.0) 707 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:43:21.489 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:21 vm02 bash[17013]: audit 2026-03-06T22:43:20.922831+0000 mon.vm02 (mon.0) 707 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:43:22.478 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:22 vm07 bash[20848]: cluster 2026-03-06T22:43:20.909482+0000 mgr.vm02.opvwec (mgr.14199) 229 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:22.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:22 vm07 bash[20848]: cluster 2026-03-06T22:43:20.909482+0000 mgr.vm02.opvwec (mgr.14199) 229 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:22.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:22 vm07 bash[20848]: audit 2026-03-06T22:43:21.251597+0000 mon.vm02 (mon.0) 708 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:43:22.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:22 vm07 bash[20848]: audit 2026-03-06T22:43:21.251597+0000 mon.vm02 (mon.0) 708 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:43:22.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:22 vm07 bash[20848]: audit 2026-03-06T22:43:21.252340+0000 mon.vm02 (mon.0) 709 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-06T23:43:22.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:22 vm07 bash[20848]: audit 2026-03-06T22:43:21.252340+0000 mon.vm02 (mon.0) 709 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-06T23:43:22.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:22 vm07 bash[20848]: cluster 2026-03-06T22:43:21.253364+0000 mgr.vm02.opvwec (mgr.14199) 230 : cluster [DBG] pgmap v150: 1 pgs: 1 
active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:22.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:22 vm07 bash[20848]: cluster 2026-03-06T22:43:21.253364+0000 mgr.vm02.opvwec (mgr.14199) 230 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:22.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:22 vm07 bash[20848]: audit 2026-03-06T22:43:21.258349+0000 mon.vm02 (mon.0) 710 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:43:22.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:22 vm07 bash[20848]: audit 2026-03-06T22:43:21.258349+0000 mon.vm02 (mon.0) 710 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:43:22.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:22 vm07 bash[20848]: audit 2026-03-06T22:43:21.260666+0000 mon.vm02 (mon.0) 711 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-06T23:43:22.479 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:22 vm07 bash[20848]: audit 2026-03-06T22:43:21.260666+0000 mon.vm02 (mon.0) 711 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-06T23:43:22.489 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:22 vm02 bash[17013]: cluster 2026-03-06T22:43:20.909482+0000 mgr.vm02.opvwec (mgr.14199) 229 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:22.489 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:22 vm02 bash[17013]: cluster 2026-03-06T22:43:20.909482+0000 mgr.vm02.opvwec (mgr.14199) 229 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:22.489 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:22 vm02 bash[17013]: audit 2026-03-06T22:43:21.251597+0000 mon.vm02 (mon.0) 708 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:43:22.489 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:22 vm02 bash[17013]: audit 2026-03-06T22:43:21.251597+0000 mon.vm02 (mon.0) 708 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-06T23:43:22.489 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:22 vm02 bash[17013]: audit 2026-03-06T22:43:21.252340+0000 mon.vm02 (mon.0) 709 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-06T23:43:22.489 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:22 vm02 bash[17013]: audit 2026-03-06T22:43:21.252340+0000 mon.vm02 (mon.0) 709 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-06T23:43:22.489 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:22 vm02 bash[17013]: cluster 2026-03-06T22:43:21.253364+0000 mgr.vm02.opvwec (mgr.14199) 230 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 
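The records above and below show teuthology's cephadm wait_for_service task polling the orchestrator: it re-runs "ceph orch ls -f json" inside a cephadm shell and compares each service's status.running against status.size, retrying until they match or the timeout expires (the poll above logged "jaeger-collector has 0/1"; a later poll below logs "1/1" before the task moves on to jaeger-query and jaeger-agent with the same 300s timeout). A minimal Python sketch of that polling pattern follows. It is not the actual tasks/cephadm.py implementation; the function name, the subprocess invocation, and the 5s retry interval are illustrative assumptions.

# Sketch of the wait_for_service polling pattern visible in this log.
# Assumptions: helper name, subprocess invocation, retry interval.
import json
import subprocess
import time

def wait_for_service_running(service_name, fsid, image, timeout=300, interval=5):
    cmd = [
        "sudo", "/home/ubuntu/cephtest/cephadm", "--image", image, "shell",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "--fsid", fsid,
        "--", "ceph", "orch", "ls", "-f", "json",
    ]
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        # One poll == one of the "ceph orch ls -f json" DEBUG invocations above.
        services = json.loads(subprocess.check_output(cmd))
        for svc in services:
            if svc["service_name"] == service_name:
                status = svc.get("status", {})
                running, size = status.get("running", 0), status.get("size", 0)
                print(f"{service_name} has {running}/{size}")  # e.g. "jaeger-collector has 0/1"
                if size > 0 and running == size:
                    return
        time.sleep(interval)
    raise TimeoutError(f"{service_name} did not reach full size within {timeout}s")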
2026-03-06T23:43:22.489 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:22 vm02 bash[17013]: cluster 2026-03-06T22:43:21.253364+0000 mgr.vm02.opvwec (mgr.14199) 230 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:22.489 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:22 vm02 bash[17013]: audit 2026-03-06T22:43:21.258349+0000 mon.vm02 (mon.0) 710 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:43:22.489 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:22 vm02 bash[17013]: audit 2026-03-06T22:43:21.258349+0000 mon.vm02 (mon.0) 710 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:43:22.489 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:22 vm02 bash[17013]: audit 2026-03-06T22:43:21.260666+0000 mon.vm02 (mon.0) 711 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-06T23:43:22.489 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:22 vm02 bash[17013]: audit 2026-03-06T22:43:21.260666+0000 mon.vm02 (mon.0) 711 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-06T23:43:23.267 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config 2026-03-06T23:43:23.706 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-06T23:43:23.711 INFO:teuthology.orchestra.run.vm02.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-06T22:37:53.328214Z", "last_refresh": "2026-03-06T22:43:20.910989Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:38:58.838036Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-06T22:37:51.782353Z", "last_refresh": "2026-03-06T22:43:20.099120Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:38:59.749903Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-06T22:37:51.310617Z", "last_refresh": "2026-03-06T22:43:20.099093Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:41:58.959413Z service:elasticsearch [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "elasticsearch", "service_type": "elasticsearch", "status": {"created": "2026-03-06T22:41:51.116481Z", "last_refresh": "2026-03-06T22:43:20.098953Z", "ports": [9200], "running": 1, "size": 1}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-06T22:37:52.581776Z", "last_refresh": "2026-03-06T22:43:20.910939Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:41:58.188734Z service:jaeger-agent [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "jaeger-agent", "service_type": "jaeger-agent", "status": {"created": "2026-03-06T22:41:51.139431Z", "last_refresh": "2026-03-06T22:43:20.098774Z", "ports": [6799], 
"running": 2, "size": 2}}, {"events": ["2026-03-06T22:41:59.701265Z service:jaeger-collector [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "jaeger-collector", "service_type": "jaeger-collector", "status": {"created": "2026-03-06T22:41:51.121608Z", "last_refresh": "2026-03-06T22:43:20.911057Z", "ports": [14250], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:42:00.496940Z service:jaeger-query [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "jaeger-query", "service_type": "jaeger-query", "status": {"created": "2026-03-06T22:41:51.130859Z", "last_refresh": "2026-03-06T22:43:20.099010Z", "ports": [16686], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:39:01.334137Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-06T22:37:50.865444Z", "last_refresh": "2026-03-06T22:43:20.099038Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:02.384235Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm02:192.168.123.102=vm02", "vm07:192.168.123.107=vm07"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-06T22:38:51.745852Z", "last_refresh": "2026-03-06T22:43:20.098824Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:00.489648Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-06T22:37:52.956113Z", "last_refresh": "2026-03-06T22:43:20.098982Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:21.440144Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-06T22:39:21.434901Z", "last_refresh": "2026-03-06T22:43:20.098855Z", "running": 8, "size": 8}}, {"events": ["2026-03-06T22:39:02.388064Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-06T22:37:52.205852Z", "last_refresh": "2026-03-06T22:43:20.911118Z", "ports": [9095], "running": 1, "size": 1}}] 2026-03-06T23:43:23.724 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:23 vm02 bash[17013]: cluster 2026-03-06T22:43:22.123510+0000 mon.vm02 (mon.0) 712 : cluster [INF] Health check cleared: CEPHADM_FAILED_DAEMON (was: 2 failed cephadm daemon(s)) 2026-03-06T23:43:23.724 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:23 vm02 bash[17013]: cluster 2026-03-06T22:43:22.123510+0000 mon.vm02 (mon.0) 712 : cluster [INF] Health check cleared: CEPHADM_FAILED_DAEMON (was: 2 failed cephadm daemon(s)) 2026-03-06T23:43:23.724 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:23 vm02 bash[17013]: cluster 2026-03-06T22:43:22.123539+0000 mon.vm02 (mon.0) 713 : cluster [INF] Cluster is now healthy 2026-03-06T23:43:23.724 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:23 vm02 bash[17013]: cluster 2026-03-06T22:43:22.123539+0000 mon.vm02 (mon.0) 713 : cluster [INF] Cluster is now healthy 2026-03-06T23:43:23.729 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:23 vm07 bash[20848]: cluster 2026-03-06T22:43:22.123510+0000 mon.vm02 (mon.0) 
712 : cluster [INF] Health check cleared: CEPHADM_FAILED_DAEMON (was: 2 failed cephadm daemon(s)) 2026-03-06T23:43:23.730 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:23 vm07 bash[20848]: cluster 2026-03-06T22:43:22.123510+0000 mon.vm02 (mon.0) 712 : cluster [INF] Health check cleared: CEPHADM_FAILED_DAEMON (was: 2 failed cephadm daemon(s)) 2026-03-06T23:43:23.730 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:23 vm07 bash[20848]: cluster 2026-03-06T22:43:22.123539+0000 mon.vm02 (mon.0) 713 : cluster [INF] Cluster is now healthy 2026-03-06T23:43:23.730 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:23 vm07 bash[20848]: cluster 2026-03-06T22:43:22.123539+0000 mon.vm02 (mon.0) 713 : cluster [INF] Cluster is now healthy 2026-03-06T23:43:24.009 INFO:tasks.cephadm:jaeger-collector has 1/1 2026-03-06T23:43:24.009 INFO:teuthology.run_tasks:Running task cephadm.wait_for_service... 2026-03-06T23:43:24.012 INFO:tasks.cephadm:Waiting for ceph service jaeger-query to start (timeout 300)... 2026-03-06T23:43:24.012 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph orch ls -f json 2026-03-06T23:43:24.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:24 vm07 bash[20848]: cluster 2026-03-06T22:43:23.253603+0000 mgr.vm02.opvwec (mgr.14199) 231 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:24.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:24 vm07 bash[20848]: cluster 2026-03-06T22:43:23.253603+0000 mgr.vm02.opvwec (mgr.14199) 231 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:24.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:24 vm07 bash[20848]: audit 2026-03-06T22:43:23.701709+0000 mgr.vm02.opvwec (mgr.14199) 232 : audit [DBG] from='client.14526 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:43:24.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:24 vm07 bash[20848]: audit 2026-03-06T22:43:23.701709+0000 mgr.vm02.opvwec (mgr.14199) 232 : audit [DBG] from='client.14526 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:43:24.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:24 vm02 bash[17013]: cluster 2026-03-06T22:43:23.253603+0000 mgr.vm02.opvwec (mgr.14199) 231 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:24.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:24 vm02 bash[17013]: cluster 2026-03-06T22:43:23.253603+0000 mgr.vm02.opvwec (mgr.14199) 231 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:24.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:24 vm02 bash[17013]: audit 2026-03-06T22:43:23.701709+0000 mgr.vm02.opvwec (mgr.14199) 232 : audit [DBG] from='client.14526 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:43:24.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:24 vm02 bash[17013]: audit 2026-03-06T22:43:23.701709+0000 mgr.vm02.opvwec (mgr.14199) 232 : audit 
[DBG] from='client.14526 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:43:26.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:26 vm07 bash[20848]: cluster 2026-03-06T22:43:25.253915+0000 mgr.vm02.opvwec (mgr.14199) 233 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:26.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:26 vm07 bash[20848]: cluster 2026-03-06T22:43:25.253915+0000 mgr.vm02.opvwec (mgr.14199) 233 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:26.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:26 vm02 bash[17013]: cluster 2026-03-06T22:43:25.253915+0000 mgr.vm02.opvwec (mgr.14199) 233 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:26.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:26 vm02 bash[17013]: cluster 2026-03-06T22:43:25.253915+0000 mgr.vm02.opvwec (mgr.14199) 233 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:28.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:28 vm07 bash[20848]: cluster 2026-03-06T22:43:27.254211+0000 mgr.vm02.opvwec (mgr.14199) 234 : cluster [DBG] pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:28.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:28 vm07 bash[20848]: cluster 2026-03-06T22:43:27.254211+0000 mgr.vm02.opvwec (mgr.14199) 234 : cluster [DBG] pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:28.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:28 vm02 bash[17013]: cluster 2026-03-06T22:43:27.254211+0000 mgr.vm02.opvwec (mgr.14199) 234 : cluster [DBG] pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:28.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:28 vm02 bash[17013]: cluster 2026-03-06T22:43:27.254211+0000 mgr.vm02.opvwec (mgr.14199) 234 : cluster [DBG] pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:28.828 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config 2026-03-06T23:43:29.211 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-06T23:43:29.211 INFO:teuthology.orchestra.run.vm02.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-06T22:37:53.328214Z", "last_refresh": "2026-03-06T22:43:20.910989Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:38:58.838036Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-06T22:37:51.782353Z", "last_refresh": "2026-03-06T22:43:20.099120Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:38:59.749903Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-06T22:37:51.310617Z", "last_refresh": "2026-03-06T22:43:20.099093Z", "running": 2, "size": 2}}, {"events": 
["2026-03-06T22:41:58.959413Z service:elasticsearch [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "elasticsearch", "service_type": "elasticsearch", "status": {"created": "2026-03-06T22:41:51.116481Z", "last_refresh": "2026-03-06T22:43:20.098953Z", "ports": [9200], "running": 1, "size": 1}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-06T22:37:52.581776Z", "last_refresh": "2026-03-06T22:43:20.910939Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:41:58.188734Z service:jaeger-agent [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "jaeger-agent", "service_type": "jaeger-agent", "status": {"created": "2026-03-06T22:41:51.139431Z", "last_refresh": "2026-03-06T22:43:20.098774Z", "ports": [6799], "running": 2, "size": 2}}, {"events": ["2026-03-06T22:41:59.701265Z service:jaeger-collector [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "jaeger-collector", "service_type": "jaeger-collector", "status": {"created": "2026-03-06T22:41:51.121608Z", "last_refresh": "2026-03-06T22:43:20.911057Z", "ports": [14250], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:42:00.496940Z service:jaeger-query [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "jaeger-query", "service_type": "jaeger-query", "status": {"created": "2026-03-06T22:41:51.130859Z", "last_refresh": "2026-03-06T22:43:20.099010Z", "ports": [16686], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:39:01.334137Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-06T22:37:50.865444Z", "last_refresh": "2026-03-06T22:43:20.099038Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:02.384235Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm02:192.168.123.102=vm02", "vm07:192.168.123.107=vm07"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-06T22:38:51.745852Z", "last_refresh": "2026-03-06T22:43:20.098824Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:00.489648Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-06T22:37:52.956113Z", "last_refresh": "2026-03-06T22:43:20.098982Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:21.440144Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-06T22:39:21.434901Z", "last_refresh": "2026-03-06T22:43:20.098855Z", "running": 8, "size": 8}}, {"events": ["2026-03-06T22:39:02.388064Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-06T22:37:52.205852Z", "last_refresh": "2026-03-06T22:43:20.911118Z", "ports": [9095], "running": 1, "size": 1}}] 2026-03-06T23:43:29.271 INFO:tasks.cephadm:jaeger-query has 1/1 2026-03-06T23:43:29.271 INFO:teuthology.run_tasks:Running task 
cephadm.wait_for_service... 2026-03-06T23:43:29.273 INFO:tasks.cephadm:Waiting for ceph service jaeger-agent to start (timeout 300)... 2026-03-06T23:43:29.273 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- ceph orch ls -f json 2026-03-06T23:43:30.722 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:30 vm02 bash[17013]: audit 2026-03-06T22:43:29.207880+0000 mgr.vm02.opvwec (mgr.14199) 235 : audit [DBG] from='client.14530 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:43:30.723 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:30 vm02 bash[17013]: audit 2026-03-06T22:43:29.207880+0000 mgr.vm02.opvwec (mgr.14199) 235 : audit [DBG] from='client.14530 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:43:30.723 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:30 vm02 bash[17013]: cluster 2026-03-06T22:43:29.254468+0000 mgr.vm02.opvwec (mgr.14199) 236 : cluster [DBG] pgmap v154: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:30.723 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:30 vm02 bash[17013]: cluster 2026-03-06T22:43:29.254468+0000 mgr.vm02.opvwec (mgr.14199) 236 : cluster [DBG] pgmap v154: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:30.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:30 vm07 bash[20848]: audit 2026-03-06T22:43:29.207880+0000 mgr.vm02.opvwec (mgr.14199) 235 : audit [DBG] from='client.14530 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:43:30.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:30 vm07 bash[20848]: audit 2026-03-06T22:43:29.207880+0000 mgr.vm02.opvwec (mgr.14199) 235 : audit [DBG] from='client.14530 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:43:30.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:30 vm07 bash[20848]: cluster 2026-03-06T22:43:29.254468+0000 mgr.vm02.opvwec (mgr.14199) 236 : cluster [DBG] pgmap v154: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:30.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:30 vm07 bash[20848]: cluster 2026-03-06T22:43:29.254468+0000 mgr.vm02.opvwec (mgr.14199) 236 : cluster [DBG] pgmap v154: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:32.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:32 vm07 bash[20848]: cluster 2026-03-06T22:43:31.254745+0000 mgr.vm02.opvwec (mgr.14199) 237 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:32.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:32 vm07 bash[20848]: cluster 2026-03-06T22:43:31.254745+0000 mgr.vm02.opvwec (mgr.14199) 237 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:32.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:32 vm02 bash[17013]: cluster 2026-03-06T22:43:31.254745+0000 mgr.vm02.opvwec (mgr.14199) 237 : cluster [DBG] pgmap v155: 
1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:32.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:32 vm02 bash[17013]: cluster 2026-03-06T22:43:31.254745+0000 mgr.vm02.opvwec (mgr.14199) 237 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:34.067 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config 2026-03-06T23:43:34.426 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-06T23:43:34.426 INFO:teuthology.orchestra.run.vm02.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-06T22:37:53.328214Z", "last_refresh": "2026-03-06T22:43:20.910989Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:38:58.838036Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-06T22:37:51.782353Z", "last_refresh": "2026-03-06T22:43:20.099120Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:38:59.749903Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-06T22:37:51.310617Z", "last_refresh": "2026-03-06T22:43:20.099093Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:41:58.959413Z service:elasticsearch [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "elasticsearch", "service_type": "elasticsearch", "status": {"created": "2026-03-06T22:41:51.116481Z", "last_refresh": "2026-03-06T22:43:20.098953Z", "ports": [9200], "running": 1, "size": 1}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-06T22:37:52.581776Z", "last_refresh": "2026-03-06T22:43:20.910939Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:41:58.188734Z service:jaeger-agent [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "jaeger-agent", "service_type": "jaeger-agent", "status": {"created": "2026-03-06T22:41:51.139431Z", "last_refresh": "2026-03-06T22:43:20.098774Z", "ports": [6799], "running": 2, "size": 2}}, {"events": ["2026-03-06T22:41:59.701265Z service:jaeger-collector [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "jaeger-collector", "service_type": "jaeger-collector", "status": {"created": "2026-03-06T22:41:51.121608Z", "last_refresh": "2026-03-06T22:43:20.911057Z", "ports": [14250], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:42:00.496940Z service:jaeger-query [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "jaeger-query", "service_type": "jaeger-query", "status": {"created": "2026-03-06T22:41:51.130859Z", "last_refresh": "2026-03-06T22:43:20.099010Z", "ports": [16686], "running": 1, "size": 1}}, {"events": ["2026-03-06T22:39:01.334137Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-06T22:37:50.865444Z", "last_refresh": "2026-03-06T22:43:20.099038Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:02.384235Z service:mon [INFO] \"service was 
created\""], "placement": {"count": 2, "hosts": ["vm02:192.168.123.102=vm02", "vm07:192.168.123.107=vm07"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-06T22:38:51.745852Z", "last_refresh": "2026-03-06T22:43:20.098824Z", "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:00.489648Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-06T22:37:52.956113Z", "last_refresh": "2026-03-06T22:43:20.098982Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-06T22:39:21.440144Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-06T22:39:21.434901Z", "last_refresh": "2026-03-06T22:43:20.098855Z", "running": 8, "size": 8}}, {"events": ["2026-03-06T22:39:02.388064Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-06T22:37:52.205852Z", "last_refresh": "2026-03-06T22:43:20.911118Z", "ports": [9095], "running": 1, "size": 1}}] 2026-03-06T23:43:34.499 INFO:tasks.cephadm:jaeger-agent has 2/2 2026-03-06T23:43:34.499 INFO:teuthology.run_tasks:Running task cephadm.shell... 2026-03-06T23:43:34.501 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm02.local 2026-03-06T23:43:34.501 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- bash -c 'stat -c '"'"'%u %g'"'"' /var/log/ceph | grep '"'"'167 167'"'"'' 2026-03-06T23:43:34.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:34 vm07 bash[20848]: cluster 2026-03-06T22:43:33.255068+0000 mgr.vm02.opvwec (mgr.14199) 238 : cluster [DBG] pgmap v156: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:34.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:34 vm07 bash[20848]: cluster 2026-03-06T22:43:33.255068+0000 mgr.vm02.opvwec (mgr.14199) 238 : cluster [DBG] pgmap v156: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:34.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:34 vm02 bash[17013]: cluster 2026-03-06T22:43:33.255068+0000 mgr.vm02.opvwec (mgr.14199) 238 : cluster [DBG] pgmap v156: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:34.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:34 vm02 bash[17013]: cluster 2026-03-06T22:43:33.255068+0000 mgr.vm02.opvwec (mgr.14199) 238 : cluster [DBG] pgmap v156: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:35.444 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:35 vm02 bash[17013]: audit 2026-03-06T22:43:34.423514+0000 mgr.vm02.opvwec (mgr.14199) 239 : audit [DBG] from='client.14534 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:43:35.445 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:35 vm02 bash[17013]: audit 
2026-03-06T22:43:34.423514+0000 mgr.vm02.opvwec (mgr.14199) 239 : audit [DBG] from='client.14534 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:43:35.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:35 vm07 bash[20848]: audit 2026-03-06T22:43:34.423514+0000 mgr.vm02.opvwec (mgr.14199) 239 : audit [DBG] from='client.14534 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:43:35.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:35 vm07 bash[20848]: audit 2026-03-06T22:43:34.423514+0000 mgr.vm02.opvwec (mgr.14199) 239 : audit [DBG] from='client.14534 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-06T23:43:36.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:36 vm07 bash[20848]: cluster 2026-03-06T22:43:35.255324+0000 mgr.vm02.opvwec (mgr.14199) 240 : cluster [DBG] pgmap v157: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:36.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:36 vm07 bash[20848]: cluster 2026-03-06T22:43:35.255324+0000 mgr.vm02.opvwec (mgr.14199) 240 : cluster [DBG] pgmap v157: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:36.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:36 vm07 bash[20848]: audit 2026-03-06T22:43:35.809927+0000 mon.vm02 (mon.0) 714 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:43:36.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:36 vm07 bash[20848]: audit 2026-03-06T22:43:35.809927+0000 mon.vm02 (mon.0) 714 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:43:36.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:36 vm07 bash[20848]: audit 2026-03-06T22:43:35.813564+0000 mon.vm02 (mon.0) 715 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:43:36.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:36 vm07 bash[20848]: audit 2026-03-06T22:43:35.813564+0000 mon.vm02 (mon.0) 715 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:43:36.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:36 vm02 bash[17013]: cluster 2026-03-06T22:43:35.255324+0000 mgr.vm02.opvwec (mgr.14199) 240 : cluster [DBG] pgmap v157: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:36.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:36 vm02 bash[17013]: cluster 2026-03-06T22:43:35.255324+0000 mgr.vm02.opvwec (mgr.14199) 240 : cluster [DBG] pgmap v157: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:36.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:36 vm02 bash[17013]: audit 2026-03-06T22:43:35.809927+0000 mon.vm02 (mon.0) 714 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:43:36.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:36 vm02 bash[17013]: audit 2026-03-06T22:43:35.809927+0000 mon.vm02 (mon.0) 714 : audit [INF] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' 2026-03-06T23:43:36.738 
INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:36 vm02 bash[17013]: audit 2026-03-06T22:43:35.813564+0000 mon.vm02 (mon.0) 715 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:43:36.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:36 vm02 bash[17013]: audit 2026-03-06T22:43:35.813564+0000 mon.vm02 (mon.0) 715 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-06T23:43:38.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:38 vm07 bash[20848]: cluster 2026-03-06T22:43:37.255528+0000 mgr.vm02.opvwec (mgr.14199) 241 : cluster [DBG] pgmap v158: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:38.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:38 vm07 bash[20848]: cluster 2026-03-06T22:43:37.255528+0000 mgr.vm02.opvwec (mgr.14199) 241 : cluster [DBG] pgmap v158: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:38.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:38 vm02 bash[17013]: cluster 2026-03-06T22:43:37.255528+0000 mgr.vm02.opvwec (mgr.14199) 241 : cluster [DBG] pgmap v158: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:38.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:38 vm02 bash[17013]: cluster 2026-03-06T22:43:37.255528+0000 mgr.vm02.opvwec (mgr.14199) 241 : cluster [DBG] pgmap v158: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:39.303 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config 2026-03-06T23:43:39.407 INFO:teuthology.orchestra.run.vm02.stdout:167 167 2026-03-06T23:43:39.457 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- bash -c 'ceph orch status' 2026-03-06T23:43:40.640 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:40 vm02 bash[17013]: cluster 2026-03-06T22:43:39.255817+0000 mgr.vm02.opvwec (mgr.14199) 242 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:40.641 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:40 vm02 bash[17013]: cluster 2026-03-06T22:43:39.255817+0000 mgr.vm02.opvwec (mgr.14199) 242 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:40.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:40 vm07 bash[20848]: cluster 2026-03-06T22:43:39.255817+0000 mgr.vm02.opvwec (mgr.14199) 242 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:40.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:40 vm07 bash[20848]: cluster 2026-03-06T22:43:39.255817+0000 mgr.vm02.opvwec (mgr.14199) 242 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:42.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:42 vm07 bash[20848]: cluster 2026-03-06T22:43:41.256117+0000 mgr.vm02.opvwec (mgr.14199) 243 : cluster [DBG] pgmap v160: 1 
pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:42.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:42 vm07 bash[20848]: cluster 2026-03-06T22:43:41.256117+0000 mgr.vm02.opvwec (mgr.14199) 243 : cluster [DBG] pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:42.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:42 vm02 bash[17013]: cluster 2026-03-06T22:43:41.256117+0000 mgr.vm02.opvwec (mgr.14199) 243 : cluster [DBG] pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:42.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:42 vm02 bash[17013]: cluster 2026-03-06T22:43:41.256117+0000 mgr.vm02.opvwec (mgr.14199) 243 : cluster [DBG] pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:43.336 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config 2026-03-06T23:43:43.691 INFO:teuthology.orchestra.run.vm02.stdout:Backend: cephadm 2026-03-06T23:43:43.691 INFO:teuthology.orchestra.run.vm02.stdout:Available: Yes 2026-03-06T23:43:43.691 INFO:teuthology.orchestra.run.vm02.stdout:Paused: No 2026-03-06T23:43:43.753 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- bash -c 'ceph orch ps' 2026-03-06T23:43:44.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:44 vm07 bash[20848]: cluster 2026-03-06T22:43:43.256347+0000 mgr.vm02.opvwec (mgr.14199) 244 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:44.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:44 vm07 bash[20848]: cluster 2026-03-06T22:43:43.256347+0000 mgr.vm02.opvwec (mgr.14199) 244 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:44.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:44 vm07 bash[20848]: audit 2026-03-06T22:43:43.691007+0000 mgr.vm02.opvwec (mgr.14199) 245 : audit [DBG] from='client.14538 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-06T23:43:44.728 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:44 vm07 bash[20848]: audit 2026-03-06T22:43:43.691007+0000 mgr.vm02.opvwec (mgr.14199) 245 : audit [DBG] from='client.14538 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-06T23:43:44.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:44 vm02 bash[17013]: cluster 2026-03-06T22:43:43.256347+0000 mgr.vm02.opvwec (mgr.14199) 244 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:44.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:44 vm02 bash[17013]: cluster 2026-03-06T22:43:43.256347+0000 mgr.vm02.opvwec (mgr.14199) 244 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:44.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:44 vm02 bash[17013]: audit 2026-03-06T22:43:43.691007+0000 mgr.vm02.opvwec (mgr.14199) 245 : audit [DBG] from='client.14538 -' entity='client.admin' 
cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-06T23:43:44.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:44 vm02 bash[17013]: audit 2026-03-06T22:43:43.691007+0000 mgr.vm02.opvwec (mgr.14199) 245 : audit [DBG] from='client.14538 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-06T23:43:46.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:46 vm02 bash[17013]: cluster 2026-03-06T22:43:45.256626+0000 mgr.vm02.opvwec (mgr.14199) 246 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:46.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:46 vm02 bash[17013]: cluster 2026-03-06T22:43:45.256626+0000 mgr.vm02.opvwec (mgr.14199) 246 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:46.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:46 vm07 bash[20848]: cluster 2026-03-06T22:43:45.256626+0000 mgr.vm02.opvwec (mgr.14199) 246 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:46.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:46 vm07 bash[20848]: cluster 2026-03-06T22:43:45.256626+0000 mgr.vm02.opvwec (mgr.14199) 246 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:48.548 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config 2026-03-06T23:43:48.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:48 vm02 bash[17013]: cluster 2026-03-06T22:43:47.256886+0000 mgr.vm02.opvwec (mgr.14199) 247 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:48.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:48 vm02 bash[17013]: cluster 2026-03-06T22:43:47.256886+0000 mgr.vm02.opvwec (mgr.14199) 247 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:48.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:48 vm07 bash[20848]: cluster 2026-03-06T22:43:47.256886+0000 mgr.vm02.opvwec (mgr.14199) 247 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:48.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:48 vm07 bash[20848]: cluster 2026-03-06T22:43:47.256886+0000 mgr.vm02.opvwec (mgr.14199) 247 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail 2026-03-06T23:43:49.141 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-06T23:43:49.141 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.vm02 vm02 *:9093,9094 running (4m) 28s ago 5m 13.7M - 0.25.0 c8568f914cd2 84648da077e5 2026-03-06T23:43:49.141 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm02 vm02 running (5m) 28s ago 5m 9083k - 19.2.3-39-g340d3c24fc6 8bccc98d839a f31669dc6a31 2026-03-06T23:43:49.141 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter.vm07 vm07 running (4m) 29s ago 4m 6108k - 19.2.3-39-g340d3c24fc6 8bccc98d839a 15fc838c5569 2026-03-06T23:43:49.141 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm02 vm02 running (5m) 28s ago 5m 10.7M - 19.2.3-39-g340d3c24fc6 8bccc98d839a 5a8fefd8c4c1 
2026-03-06T23:43:49.141 INFO:teuthology.orchestra.run.vm02.stdout:crash.vm07             vm07                    running (4m)    29s ago    4m    10.7M    -        19.2.3-39-g340d3c24fc6  8bccc98d839a  1a8b1b8fe066
2026-03-06T23:43:49.141 INFO:teuthology.orchestra.run.vm02.stdout:elasticsearch.vm07     vm07  *:9200            running (97s)   29s ago    110s  1267M    -                                9a2652c5f453  45ede342ad31
2026-03-06T23:43:49.141 INFO:teuthology.orchestra.run.vm02.stdout:grafana.vm02           vm02  *:3000            running (4m)    28s ago    5m    65.8M    -        10.4.0                  c8b91775d855  b526f2d6f4e9
2026-03-06T23:43:49.141 INFO:teuthology.orchestra.run.vm02.stdout:jaeger-agent.vm02      vm02  *:6799            running (102s)  28s ago    110s  3619k    -                                9403e8d94e1c  185c8553fe19
2026-03-06T23:43:49.141 INFO:teuthology.orchestra.run.vm02.stdout:jaeger-agent.vm07      vm07  *:6799            running (101s)  29s ago    111s  3500k    -                                9403e8d94e1c  3096cfd04ae3
2026-03-06T23:43:49.141 INFO:teuthology.orchestra.run.vm02.stdout:jaeger-collector.vm02  vm02  *:14250           running (90s)   28s ago    109s  7780k    -                                2c18772d79b4  0a3dc09edf3d
2026-03-06T23:43:49.141 INFO:teuthology.orchestra.run.vm02.stdout:jaeger-query.vm07      vm07  *:16686           running (84s)   29s ago    108s  6444k    -                                87c4704a9650  4474e0025406
2026-03-06T23:43:49.141 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm02.opvwec        vm02  *:9283,8765,8443  running (6m)    28s ago    6m    530M     -        19.2.3-39-g340d3c24fc6  8bccc98d839a  b47eb74d1963
2026-03-06T23:43:49.141 INFO:teuthology.orchestra.run.vm02.stdout:mgr.vm07.jbleen        vm07  *:8443,9283,8765  running (4m)    29s ago    4m    472M     -        19.2.3-39-g340d3c24fc6  8bccc98d839a  de40c7b40128
2026-03-06T23:43:49.141 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm02               vm02                    running (6m)    28s ago    6m    51.7M    2048M    19.2.3-39-g340d3c24fc6  8bccc98d839a  c6a67e710759
2026-03-06T23:43:49.141 INFO:teuthology.orchestra.run.vm02.stdout:mon.vm07               vm07                    running (4m)    29s ago    4m    43.9M    2048M    19.2.3-39-g340d3c24fc6  8bccc98d839a  d04715d4bcf4
2026-03-06T23:43:49.141 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm02     vm02  *:9100            running (5m)    28s ago    5m    8223k    -        1.7.0                   72c9c2088986  8881e16001aa
2026-03-06T23:43:49.141 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.vm07     vm07  *:9100            running (4m)    29s ago    4m    7659k    -        1.7.0                   72c9c2088986  ebec8a2531f9
2026-03-06T23:43:49.141 INFO:teuthology.orchestra.run.vm02.stdout:osd.0                  vm07                    running (3m)    29s ago    3m    56.3M    4096M    19.2.3-39-g340d3c24fc6  8bccc98d839a  fdee2703d717
2026-03-06T23:43:49.141 INFO:teuthology.orchestra.run.vm02.stdout:osd.1                  vm02                    running (3m)    28s ago    3m    57.3M    4096M    19.2.3-39-g340d3c24fc6  8bccc98d839a  a3d88ef28af4
2026-03-06T23:43:49.141 INFO:teuthology.orchestra.run.vm02.stdout:osd.2                  vm07                    running (3m)    29s ago    3m    59.5M    4096M    19.2.3-39-g340d3c24fc6  8bccc98d839a  38338d41322a
2026-03-06T23:43:49.141 INFO:teuthology.orchestra.run.vm02.stdout:osd.3                  vm02                    running (3m)    28s ago    3m    37.8M    4096M    19.2.3-39-g340d3c24fc6  8bccc98d839a  cf70c9c0e386
2026-03-06T23:43:49.141 INFO:teuthology.orchestra.run.vm02.stdout:osd.4                  vm02                    running (3m)    28s ago    3m    58.5M    4096M    19.2.3-39-g340d3c24fc6  8bccc98d839a  bc8c49c1ec9a
2026-03-06T23:43:49.141 INFO:teuthology.orchestra.run.vm02.stdout:osd.5                  vm07                    running (3m)    29s ago    3m    34.9M    4096M    19.2.3-39-g340d3c24fc6  8bccc98d839a  de138beeb700
2026-03-06T23:43:49.141 INFO:teuthology.orchestra.run.vm02.stdout:osd.6                  vm07                    running (3m)    29s ago    3m    37.8M    4096M    19.2.3-39-g340d3c24fc6  8bccc98d839a  4cc4e2c2032e
2026-03-06T23:43:49.141 INFO:teuthology.orchestra.run.vm02.stdout:osd.7                  vm02                    running (3m)    28s ago    3m    59.0M    4096M    19.2.3-39-g340d3c24fc6  8bccc98d839a  00712ed7c41f
2026-03-06T23:43:49.141 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.vm02        vm02  *:9095            running (4m)    28s ago    5m    37.0M    -        2.51.0                  1d3b7f56885b  8e10e7c97737
2026-03-06T23:43:49.205 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- bash -c 'ceph orch ls'
2026-03-06T23:43:50.937 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:50 vm02 bash[17013]: audit 2026-03-06T22:43:49.136094+0000 mgr.vm02.opvwec (mgr.14199) 248 : audit [DBG] from='client.24327 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:43:50.937 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:50 vm02 bash[17013]: cluster 2026-03-06T22:43:49.257127+0000 mgr.vm02.opvwec (mgr.14199) 249 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:43:50.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:50 vm07 bash[20848]: audit 2026-03-06T22:43:49.136094+0000 mgr.vm02.opvwec (mgr.14199) 248 : audit [DBG] from='client.24327 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:43:50.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:50 vm07 bash[20848]: cluster 2026-03-06T22:43:49.257127+0000 mgr.vm02.opvwec (mgr.14199) 249 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:43:51.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:51 vm07 bash[20848]: audit 2026-03-06T22:43:50.803138+0000 mon.vm02 (mon.0) 716 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:43:51.988 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:51 vm02 bash[17013]: audit 2026-03-06T22:43:50.803138+0000 mon.vm02 (mon.0) 716 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:43:52.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:52 vm07 bash[20848]: cluster 2026-03-06T22:43:51.257386+0000 mgr.vm02.opvwec (mgr.14199) 250 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:43:52.988 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:52 vm02 bash[17013]: cluster 2026-03-06T22:43:51.257386+0000 mgr.vm02.opvwec (mgr.14199) 250 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:43:53.990 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:43:54.345 INFO:teuthology.orchestra.run.vm02.stdout:NAME                       PORTS        RUNNING  REFRESHED  AGE  PLACEMENT
2026-03-06T23:43:54.345 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager               ?:9093,9094  1/1      33s ago    6m   count:1
2026-03-06T23:43:54.345 INFO:teuthology.orchestra.run.vm02.stdout:ceph-exporter                           2/2      34s ago    6m   *
2026-03-06T23:43:54.345 INFO:teuthology.orchestra.run.vm02.stdout:crash                                   2/2      34s ago    6m   *
2026-03-06T23:43:54.345 INFO:teuthology.orchestra.run.vm02.stdout:elasticsearch              ?:9200       1/1      34s ago    2m   count:1
2026-03-06T23:43:54.345 INFO:teuthology.orchestra.run.vm02.stdout:grafana                    ?:3000       1/1      33s ago    6m   count:1
2026-03-06T23:43:54.345 INFO:teuthology.orchestra.run.vm02.stdout:jaeger-agent               ?:6799       2/2      34s ago    2m   *
2026-03-06T23:43:54.345 INFO:teuthology.orchestra.run.vm02.stdout:jaeger-collector           ?:14250      1/1      33s ago    2m   count:1
2026-03-06T23:43:54.345 INFO:teuthology.orchestra.run.vm02.stdout:jaeger-query               ?:16686      1/1      34s ago    2m   count:1
2026-03-06T23:43:54.345 INFO:teuthology.orchestra.run.vm02.stdout:mgr                                     2/2      34s ago    6m   count:2
2026-03-06T23:43:54.345 INFO:teuthology.orchestra.run.vm02.stdout:mon                                     2/2      34s ago    5m   vm02:192.168.123.102=vm02;vm07:192.168.123.107=vm07;count:2
2026-03-06T23:43:54.345 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter              ?:9100       2/2      34s ago    6m   *
2026-03-06T23:43:54.345 INFO:teuthology.orchestra.run.vm02.stdout:osd.all-available-devices               8        34s ago    4m   *
2026-03-06T23:43:54.345 INFO:teuthology.orchestra.run.vm02.stdout:prometheus                 ?:9095       1/1      33s ago    6m   count:1
2026-03-06T23:43:54.404 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- bash -c 'ceph orch host ls'
2026-03-06T23:43:54.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:54 vm02 bash[17013]: cluster 2026-03-06T22:43:53.257602+0000 mgr.vm02.opvwec (mgr.14199) 251 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:43:54.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:54 vm07 bash[20848]: cluster 2026-03-06T22:43:53.257602+0000 mgr.vm02.opvwec (mgr.14199) 251 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:43:55.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:55 vm07 bash[20848]: audit 2026-03-06T22:43:54.342836+0000 mgr.vm02.opvwec (mgr.14199) 252 : audit [DBG] from='client.14546 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:43:55.987 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:55 vm02 bash[17013]: audit 2026-03-06T22:43:54.342836+0000 mgr.vm02.opvwec (mgr.14199) 252 : audit [DBG] from='client.14546 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:43:56.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:56 vm07 bash[20848]: cluster 2026-03-06T22:43:55.257922+0000 mgr.vm02.opvwec (mgr.14199) 253 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:43:56.988 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:56 vm02 bash[17013]: cluster 2026-03-06T22:43:55.257922+0000 mgr.vm02.opvwec (mgr.14199) 253 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:43:58.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:43:58 vm07 bash[20848]: cluster 2026-03-06T22:43:57.258221+0000 mgr.vm02.opvwec (mgr.14199) 254 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:43:58.988 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:43:58 vm02 bash[17013]: cluster 2026-03-06T22:43:57.258221+0000 mgr.vm02.opvwec (mgr.14199) 254 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:43:59.182 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:43:59.555 INFO:teuthology.orchestra.run.vm02.stdout:HOST  ADDR             LABELS  STATUS
2026-03-06T23:43:59.555 INFO:teuthology.orchestra.run.vm02.stdout:vm02  192.168.123.102
2026-03-06T23:43:59.555 INFO:teuthology.orchestra.run.vm02.stdout:vm07  192.168.123.107
2026-03-06T23:43:59.556 INFO:teuthology.orchestra.run.vm02.stdout:2 hosts in cluster
2026-03-06T23:43:59.621 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- bash -c 'ceph orch device ls'
2026-03-06T23:44:00.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:44:00 vm07 bash[20848]: cluster 2026-03-06T22:43:59.258573+0000 mgr.vm02.opvwec (mgr.14199) 255 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:44:00.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:44:00 vm07 bash[20848]: audit 2026-03-06T22:43:59.554870+0000 mgr.vm02.opvwec (mgr.14199) 256 : audit [DBG] from='client.14550 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:44:00.988 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:44:00 vm02 bash[17013]: cluster 2026-03-06T22:43:59.258573+0000 mgr.vm02.opvwec (mgr.14199) 255 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:44:00.988 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:44:00 vm02 bash[17013]: audit 2026-03-06T22:43:59.554870+0000 mgr.vm02.opvwec (mgr.14199) 256 : audit [DBG] from='client.14550 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:44:02.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:44:02 vm07 bash[20848]: cluster 2026-03-06T22:44:01.258821+0000 mgr.vm02.opvwec (mgr.14199) 257 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:44:02.988 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:44:02 vm02 bash[17013]: cluster 2026-03-06T22:44:01.258821+0000 mgr.vm02.opvwec (mgr.14199) 257 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:44:04.416 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:44:04.738 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:44:04 vm02 bash[17013]: cluster 2026-03-06T22:44:03.259119+0000 mgr.vm02.opvwec (mgr.14199) 258 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:44:04.794 INFO:teuthology.orchestra.run.vm02.stdout:HOST  PATH      TYPE  DEVICE ID             SIZE   AVAILABLE  REFRESHED  REJECT REASONS
2026-03-06T23:44:04.794 INFO:teuthology.orchestra.run.vm02.stdout:vm02  /dev/sr0  hdd   QEMU_DVD-ROM_QM00003  366k   No         3m ago     Has a FileSystem, Insufficient space (<5GB)
2026-03-06T23:44:04.794 INFO:teuthology.orchestra.run.vm02.stdout:vm02  /dev/vdb  hdd   DWNBRSTVMM02001       20.0G  No         3m ago     Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-06T23:44:04.795 INFO:teuthology.orchestra.run.vm02.stdout:vm02  /dev/vdc  hdd   DWNBRSTVMM02002       20.0G  No         3m ago     Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-06T23:44:04.795 INFO:teuthology.orchestra.run.vm02.stdout:vm02  /dev/vdd  hdd   DWNBRSTVMM02003       20.0G  No         3m ago     Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-06T23:44:04.795 INFO:teuthology.orchestra.run.vm02.stdout:vm02  /dev/vde  hdd   DWNBRSTVMM02004       20.0G  No         3m ago     Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-06T23:44:04.795 INFO:teuthology.orchestra.run.vm02.stdout:vm07  /dev/sr0  hdd   QEMU_DVD-ROM_QM00003  366k   No         3m ago     Has a FileSystem, Insufficient space (<5GB)
2026-03-06T23:44:04.795 INFO:teuthology.orchestra.run.vm02.stdout:vm07  /dev/vdb  hdd   DWNBRSTVMM07001       20.0G  No         3m ago     Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-06T23:44:04.795 INFO:teuthology.orchestra.run.vm02.stdout:vm07  /dev/vdc  hdd   DWNBRSTVMM07002       20.0G  No         3m ago     Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-06T23:44:04.795 INFO:teuthology.orchestra.run.vm02.stdout:vm07  /dev/vdd  hdd   DWNBRSTVMM07003       20.0G  No         3m ago     Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-06T23:44:04.795 INFO:teuthology.orchestra.run.vm02.stdout:vm07  /dev/vde  hdd   DWNBRSTVMM07004       20.0G  No         3m ago     Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-06T23:44:04.867 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 -- bash -c 'ceph orch ls | grep '"'"'^osd.all-available-devices '"'"''
2026-03-06T23:44:04.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:44:04 vm07 bash[20848]: cluster 2026-03-06T22:44:03.259119+0000 mgr.vm02.opvwec (mgr.14199) 258 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:44:06.905 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:44:06 vm07 bash[20848]: audit 2026-03-06T22:44:04.792781+0000 mgr.vm02.opvwec (mgr.14199) 259 : audit [DBG] from='client.14554 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:44:06.905 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:44:06 vm07 bash[20848]: cluster 2026-03-06T22:44:05.259441+0000 mgr.vm02.opvwec (mgr.14199) 260 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:44:06.905 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:44:06 vm07 bash[20848]: audit 2026-03-06T22:44:05.803567+0000 mon.vm02 (mon.0) 717 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:44:06.988 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:44:06 vm02 bash[17013]: audit 2026-03-06T22:44:04.792781+0000 mgr.vm02.opvwec (mgr.14199) 259 : audit [DBG] from='client.14554 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-06T23:44:06.988 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:44:06 vm02 bash[17013]: cluster 2026-03-06T22:44:05.259441+0000 mgr.vm02.opvwec (mgr.14199) 260 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:44:06.988 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:44:06 vm02 bash[17013]: audit 2026-03-06T22:44:05.803567+0000 mon.vm02 (mon.0) 717 : audit [DBG] from='mgr.14199 192.168.123.102:0/4083294928' entity='mgr.vm02.opvwec' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-06T23:44:08.978 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:44:08 vm07 bash[20848]: cluster 2026-03-06T22:44:07.259769+0000 mgr.vm02.opvwec (mgr.14199) 261 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:44:08.988 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:44:08 vm02 bash[17013]: cluster 2026-03-06T22:44:07.259769+0000 mgr.vm02.opvwec (mgr.14199) 261 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-06T23:44:09.666 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/mon.vm02/config
2026-03-06T23:44:10.038 INFO:teuthology.orchestra.run.vm02.stdout:osd.all-available-devices               8        49s ago    4m   *
2026-03-06T23:44:10.091 DEBUG:teuthology.run_tasks:Unwinding manager cephadm
2026-03-06T23:44:10.093 INFO:tasks.cephadm:Teardown begin
2026-03-06T23:44:10.093 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-06T23:44:10.101 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-06T23:44:10.120 INFO:tasks.cephadm:Cleaning up testdir ceph.* files...
2026-03-06T23:44:10.120 DEBUG:teuthology.orchestra.run.vm02:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-06T23:44:10.147 DEBUG:teuthology.orchestra.run.vm07:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-06T23:44:10.166 INFO:tasks.cephadm:Stopping all daemons...
2026-03-06T23:44:10.166 INFO:tasks.cephadm.mon.vm02:Stopping mon.vm02...
2026-03-06T23:44:10.166 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@mon.vm02
2026-03-06T23:44:10.336 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:44:10 vm02 systemd[1]: Stopping Ceph mon.vm02 for f8b8c16a-19ac-11f1-87e7-9b7402b99c44...
2026-03-06T23:44:10.336 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:44:10 vm02 bash[17013]: debug 2026-03-06T22:44:10.232+0000 7f61e513f640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.vm02 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-06T23:44:10.336 INFO:journalctl@ceph.mon.vm02.vm02.stdout:Mar 06 23:44:10 vm02 bash[17013]: debug 2026-03-06T22:44:10.232+0000 7f61e513f640 -1 mon.vm02@0(leader) e2 *** Got Signal Terminated ***
2026-03-06T23:44:10.424 DEBUG:teuthology.orchestra.run.vm02:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@mon.vm02.service'
2026-03-06T23:44:10.450 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-06T23:44:10.450 INFO:tasks.cephadm.mon.vm02:Stopped mon.vm02
2026-03-06T23:44:10.450 INFO:tasks.cephadm.mon.vm07:Stopping mon.vm07...
2026-03-06T23:44:10.450 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@mon.vm07
2026-03-06T23:44:10.708 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:44:10 vm07 systemd[1]: Stopping Ceph mon.vm07 for f8b8c16a-19ac-11f1-87e7-9b7402b99c44...
2026-03-06T23:44:10.708 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:44:10 vm07 bash[20848]: debug 2026-03-06T22:44:10.509+0000 7fb260f76640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.vm07 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-06T23:44:10.708 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:44:10 vm07 bash[20848]: debug 2026-03-06T22:44:10.509+0000 7fb260f76640 -1 mon.vm07@1(peon) e2 *** Got Signal Terminated ***
2026-03-06T23:44:10.708 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:44:10 vm07 bash[38725]: ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44-mon-vm07
2026-03-06T23:44:10.708 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:44:10 vm07 systemd[1]: ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@mon.vm07.service: Deactivated successfully.
2026-03-06T23:44:10.708 INFO:journalctl@ceph.mon.vm07.vm07.stdout:Mar 06 23:44:10 vm07 systemd[1]: Stopped Ceph mon.vm07 for f8b8c16a-19ac-11f1-87e7-9b7402b99c44.
2026-03-06T23:44:10.708 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f8b8c16a-19ac-11f1-87e7-9b7402b99c44@mon.vm07.service'
2026-03-06T23:44:10.741 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-06T23:44:10.741 INFO:tasks.cephadm.mon.vm07:Stopped mon.vm07
2026-03-06T23:44:10.741 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 --force --keep-logs
2026-03-06T23:44:10.986 INFO:teuthology.orchestra.run.vm02.stdout:Deleting cluster with fsid: f8b8c16a-19ac-11f1-87e7-9b7402b99c44
2026-03-06T23:44:42.262 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 --force --keep-logs
2026-03-06T23:44:42.503 INFO:teuthology.orchestra.run.vm07.stdout:Deleting cluster with fsid: f8b8c16a-19ac-11f1-87e7-9b7402b99c44
2026-03-06T23:45:13.883 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-06T23:45:13.890 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-06T23:45:13.899 INFO:tasks.cephadm:Archiving crash dumps...
2026-03-06T23:45:13.899 DEBUG:teuthology.misc:Transferring archived files from vm02:/var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/crash to /archive/irq0-2026-03-06_20:21:59-orch:cephadm:smoke-roleless-cobaltcore-storage-v19.2.3-fasttrack-5-none-default-vps/412/remote/vm02/crash
2026-03-06T23:45:13.899 DEBUG:teuthology.orchestra.run.vm02:> sudo tar c -f - -C /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/crash -- .
2026-03-06T23:45:13.940 INFO:teuthology.orchestra.run.vm02.stderr:tar: /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/crash: Cannot open: No such file or directory
2026-03-06T23:45:13.940 INFO:teuthology.orchestra.run.vm02.stderr:tar: Error is not recoverable: exiting now
2026-03-06T23:45:13.940 DEBUG:teuthology.misc:Transferring archived files from vm07:/var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/crash to /archive/irq0-2026-03-06_20:21:59-orch:cephadm:smoke-roleless-cobaltcore-storage-v19.2.3-fasttrack-5-none-default-vps/412/remote/vm07/crash
2026-03-06T23:45:13.940 DEBUG:teuthology.orchestra.run.vm07:> sudo tar c -f - -C /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/crash -- .
2026-03-06T23:45:13.948 INFO:teuthology.orchestra.run.vm07.stderr:tar: /var/lib/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/crash: Cannot open: No such file or directory
2026-03-06T23:45:13.948 INFO:teuthology.orchestra.run.vm07.stderr:tar: Error is not recoverable: exiting now
2026-03-06T23:45:13.948 INFO:tasks.cephadm:Checking cluster log for badness...
2026-03-06T23:45:13.949 DEBUG:teuthology.orchestra.run.vm02:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v CEPHADM_DAEMON_PLACE_FAIL | egrep -v CEPHADM_FAILED_DAEMON | head -n 1
2026-03-06T23:45:13.994 INFO:tasks.cephadm:Compressing logs...
2026-03-06T23:45:13.994 DEBUG:teuthology.orchestra.run.vm02:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-06T23:45:14.037 DEBUG:teuthology.orchestra.run.vm07:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-06T23:45:14.044 INFO:teuthology.orchestra.run.vm02.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log
2026-03-06T23:45:14.044 INFO:teuthology.orchestra.run.vm02.stderr:‘/var/log/rbd-target-api’: No such file or directory
2026-03-06T23:45:14.045 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-client.ceph-exporter.vm02.log
2026-03-06T23:45:14.045 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-osd.3.log
2026-03-06T23:45:14.045 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/cephadm.log: /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-client.ceph-exporter.vm02.log: 94.1% -- replaced with /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-client.ceph-exporter.vm02.log.gz
2026-03-06T23:45:14.046 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph.log
2026-03-06T23:45:14.046 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log
2026-03-06T23:45:14.046 INFO:teuthology.orchestra.run.vm07.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory
2026-03-06T23:45:14.047 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-client.ceph-exporter.vm07.log
2026-03-06T23:45:14.048 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/cephadm.log: 89.9%gzip -5 --verbose -- /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph.log
2026-03-06T23:45:14.048 INFO:teuthology.orchestra.run.vm07.stderr: -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-06T23:45:14.048 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-osd.5.log
2026-03-06T23:45:14.048 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-client.ceph-exporter.vm07.log: /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph.log: 31.2% -- replaced with /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-client.ceph-exporter.vm07.log.gz
2026-03-06T23:45:14.049 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-osd.6.log
2026-03-06T23:45:14.049 INFO:teuthology.orchestra.run.vm07.stderr: 87.8% -- replaced with /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph.log.gz
2026-03-06T23:45:14.049 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-osd.5.log: gzip -5 --verbose -- /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-osd.2.log
2026-03-06T23:45:14.052 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-osd.3.log: gzip -5 --verbose -- /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-mon.vm02.log
2026-03-06T23:45:14.053 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph.log: 87.6% -- replaced with /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph.log.gz
2026-03-06T23:45:14.057 INFO:teuthology.orchestra.run.vm02.stderr: 91.6% -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-06T23:45:14.057 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-mgr.vm02.opvwec.log
2026-03-06T23:45:14.060 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-osd.6.log: gzip -5 --verbose -- /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph.audit.log
2026-03-06T23:45:14.065 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-mon.vm02.log: gzip -5 --verbose -- /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-osd.1.log
2026-03-06T23:45:14.068 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-osd.2.log: gzip -5 --verbose -- /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-volume.log
2026-03-06T23:45:14.069 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph.audit.log: 91.1% -- replaced with /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph.audit.log.gz
2026-03-06T23:45:14.073 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-mgr.vm02.opvwec.log: gzip -5 --verbose -- /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-osd.7.log
2026-03-06T23:45:14.080 INFO:teuthology.orchestra.run.vm07.stderr: 92.9% -- replaced with /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-osd.5.log.gz
2026-03-06T23:45:14.080 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph.cephadm.log
2026-03-06T23:45:14.080 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-osd.1.log: gzip -5 --verbose -- /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph.audit.log
2026-03-06T23:45:14.084 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-osd.7.log: 93.0% -- replaced with /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-osd.3.log.gz
2026-03-06T23:45:14.088 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-volume.log
2026-03-06T23:45:14.097 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph.audit.log: 90.9% -- replaced with /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph.audit.log.gz
2026-03-06T23:45:14.097 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph.cephadm.log
2026-03-06T23:45:14.101 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-osd.4.log
2026-03-06T23:45:14.101 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph.cephadm.log: 83.6% -- replaced with /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph.cephadm.log.gz
2026-03-06T23:45:14.102 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-mon.vm07.log
2026-03-06T23:45:14.102 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph.cephadm.log: 83.0% -- replaced with /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph.cephadm.log.gz
2026-03-06T23:45:14.102 INFO:teuthology.orchestra.run.vm07.stderr: 92.9% -- replaced with /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-osd.6.log.gz
2026-03-06T23:45:14.103 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-osd.0.log
2026-03-06T23:45:14.115 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-mon.vm07.log: 92.7% -- replaced with /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-osd.2.log.gz
2026-03-06T23:45:14.115 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-mgr.vm07.jbleen.log
2026-03-06T23:45:14.116 INFO:teuthology.orchestra.run.vm07.stderr: 93.4% -- replaced with /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-volume.log.gz
2026-03-06T23:45:14.118 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-osd.0.log: /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-mgr.vm07.jbleen.log: 91.7% -- replaced with /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-mgr.vm07.jbleen.log.gz
2026-03-06T23:45:14.149 INFO:teuthology.orchestra.run.vm07.stderr: 92.8% -- replaced with /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-osd.0.log.gz
2026-03-06T23:45:14.151 INFO:teuthology.orchestra.run.vm07.stderr: 92.9% -- replaced with /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-mon.vm07.log.gz
2026-03-06T23:45:14.152 INFO:teuthology.orchestra.run.vm07.stderr:
2026-03-06T23:45:14.152 INFO:teuthology.orchestra.run.vm07.stderr:real	0m0.112s
2026-03-06T23:45:14.152 INFO:teuthology.orchestra.run.vm07.stderr:user	0m0.206s
2026-03-06T23:45:14.152 INFO:teuthology.orchestra.run.vm07.stderr:sys	0m0.008s
2026-03-06T23:45:14.163 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-osd.4.log: 92.9% 93.3% -- replaced with /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-osd.1.log.gz
2026-03-06T23:45:14.173 INFO:teuthology.orchestra.run.vm02.stderr: -- replaced with /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-volume.log.gz
2026-03-06T23:45:14.176 INFO:teuthology.orchestra.run.vm02.stderr: 92.9% -- replaced with /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-osd.4.log.gz
2026-03-06T23:45:14.187 INFO:teuthology.orchestra.run.vm02.stderr: 90.1% 92.7% -- replaced with /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-osd.7.log.gz
2026-03-06T23:45:14.188 INFO:teuthology.orchestra.run.vm02.stderr: -- replaced with /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-mgr.vm02.opvwec.log.gz
2026-03-06T23:45:14.243 INFO:teuthology.orchestra.run.vm02.stderr: 91.2% -- replaced with /var/log/ceph/f8b8c16a-19ac-11f1-87e7-9b7402b99c44/ceph-mon.vm02.log.gz
2026-03-06T23:45:14.244 INFO:teuthology.orchestra.run.vm02.stderr:
2026-03-06T23:45:14.244 INFO:teuthology.orchestra.run.vm02.stderr:real	0m0.205s
2026-03-06T23:45:14.244 INFO:teuthology.orchestra.run.vm02.stderr:user	0m0.327s
2026-03-06T23:45:14.244 INFO:teuthology.orchestra.run.vm02.stderr:sys	0m0.022s
2026-03-06T23:45:14.245 INFO:tasks.cephadm:Archiving logs...
2026-03-06T23:45:14.245 DEBUG:teuthology.misc:Transferring archived files from vm02:/var/log/ceph to /archive/irq0-2026-03-06_20:21:59-orch:cephadm:smoke-roleless-cobaltcore-storage-v19.2.3-fasttrack-5-none-default-vps/412/remote/vm02/log
2026-03-06T23:45:14.245 DEBUG:teuthology.orchestra.run.vm02:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-06T23:45:14.319 DEBUG:teuthology.misc:Transferring archived files from vm07:/var/log/ceph to /archive/irq0-2026-03-06_20:21:59-orch:cephadm:smoke-roleless-cobaltcore-storage-v19.2.3-fasttrack-5-none-default-vps/412/remote/vm07/log
2026-03-06T23:45:14.319 DEBUG:teuthology.orchestra.run.vm07:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-06T23:45:14.338 INFO:tasks.cephadm:Removing cluster...
2026-03-06T23:45:14.338 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 --force
2026-03-06T23:45:14.595 INFO:teuthology.orchestra.run.vm02.stdout:Deleting cluster with fsid: f8b8c16a-19ac-11f1-87e7-9b7402b99c44
2026-03-06T23:45:15.660 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f8b8c16a-19ac-11f1-87e7-9b7402b99c44 --force
2026-03-06T23:45:15.894 INFO:teuthology.orchestra.run.vm07.stdout:Deleting cluster with fsid: f8b8c16a-19ac-11f1-87e7-9b7402b99c44
2026-03-06T23:45:16.982 INFO:tasks.cephadm:Removing cephadm ...
2026-03-06T23:45:16.982 DEBUG:teuthology.orchestra.run.vm02:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-06T23:45:16.985 DEBUG:teuthology.orchestra.run.vm07:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-06T23:45:16.989 INFO:tasks.cephadm:Teardown complete
2026-03-06T23:45:16.989 DEBUG:teuthology.run_tasks:Unwinding manager clock
2026-03-06T23:45:16.991 INFO:teuthology.task.clock:Checking final clock skew...
2026-03-06T23:45:16.991 DEBUG:teuthology.orchestra.run.vm02:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-06T23:45:17.029 DEBUG:teuthology.orchestra.run.vm07:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-06T23:45:17.282 INFO:teuthology.orchestra.run.vm07.stdout:     remote           refid      st t when poll reach   delay   offset  jitter
2026-03-06T23:45:17.282 INFO:teuthology.orchestra.run.vm07.stdout:==============================================================================
2026-03-06T23:45:17.282 INFO:teuthology.orchestra.run.vm07.stdout: 0.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-06T23:45:17.282 INFO:teuthology.orchestra.run.vm07.stdout: 1.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-06T23:45:17.282 INFO:teuthology.orchestra.run.vm07.stdout: 2.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-06T23:45:17.282 INFO:teuthology.orchestra.run.vm07.stdout: 3.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-06T23:45:17.282 INFO:teuthology.orchestra.run.vm07.stdout: ntp.ubuntu.com  .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-06T23:45:17.282 INFO:teuthology.orchestra.run.vm07.stdout:+stage3.opensuse 127.51.226.51    3 u   25   64  377   24.992   -0.341   1.997
2026-03-06T23:45:17.282 INFO:teuthology.orchestra.run.vm07.stdout:-ntp.kernfusion. 237.17.204.95    2 u   28   64  377   37.599   +3.274   1.543
2026-03-06T23:45:17.282 INFO:teuthology.orchestra.run.vm07.stdout:-s7.vonderste.in 137.226.119.25   2 u   25   64  377   28.327   -2.048   1.711
2026-03-06T23:45:17.282 INFO:teuthology.orchestra.run.vm07.stdout:*185.13.148.71   79.133.44.146    2 u   25   64  377   31.980   +0.365   1.079
2026-03-06T23:45:17.282 INFO:teuthology.orchestra.run.vm07.stdout:#tor.nocabal.de  131.188.3.222    2 u   22   64  377   25.350   +0.319   1.441
2026-03-06T23:45:17.282 INFO:teuthology.orchestra.run.vm07.stdout:+time2.sebhostin 127.65.222.189   2 u   28   64  377   28.927   +0.795   1.047
2026-03-06T23:45:17.283 INFO:teuthology.orchestra.run.vm07.stdout:-185.125.190.58  145.238.80.80    2 u   39   64  377   34.430   -1.168   2.566
2026-03-06T23:45:17.283 INFO:teuthology.orchestra.run.vm07.stdout:+mail.gunnarhofm 192.53.103.103   2 u   24   64  377   25.085   +0.237   0.549
2026-03-06T23:45:17.283 INFO:teuthology.orchestra.run.vm07.stdout:-x1.ncomputers.o 82.64.42.185     2 u   25   64  377   32.364   +2.703   2.836
2026-03-06T23:45:17.283 INFO:teuthology.orchestra.run.vm07.stdout:-185.125.190.57  194.121.207.249  2 u   41   64  377   35.644   -0.624   3.115
2026-03-06T23:45:17.283 INFO:teuthology.orchestra.run.vm07.stdout:#ip217-154-182-6 37.15.221.189    2 u   25   64  377   67.416   -6.273   1.076
2026-03-06T23:45:17.283 INFO:teuthology.orchestra.run.vm07.stdout:#alphyn.canonica 132.163.96.1     2 u   38   64  377   96.586   -1.465   3.035
2026-03-06T23:45:17.283 INFO:teuthology.orchestra.run.vm07.stdout:-185.125.190.56  194.121.207.249  2 u   34   64  377   35.823   -0.703   2.207
2026-03-06T23:45:17.283 INFO:teuthology.orchestra.run.vm02.stdout:     remote           refid      st t when poll reach   delay   offset  jitter
2026-03-06T23:45:17.283 INFO:teuthology.orchestra.run.vm02.stdout:==============================================================================
2026-03-06T23:45:17.283 INFO:teuthology.orchestra.run.vm02.stdout: 0.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-06T23:45:17.283 INFO:teuthology.orchestra.run.vm02.stdout: 1.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-06T23:45:17.283 INFO:teuthology.orchestra.run.vm02.stdout: 2.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-06T23:45:17.283 INFO:teuthology.orchestra.run.vm02.stdout: 3.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-06T23:45:17.283 INFO:teuthology.orchestra.run.vm02.stdout: ntp.ubuntu.com  .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-06T23:45:17.283 INFO:teuthology.orchestra.run.vm02.stdout:+ntp2.kernfusion 192.53.103.108   2 u   30   64  377   31.612   -1.788   4.221
2026-03-06T23:45:17.283 INFO:teuthology.orchestra.run.vm02.stdout:+router02.i-tk.d 192.168.125.22   2 u   31   64  377   48.033   -0.467   2.550
2026-03-06T23:45:17.283 INFO:teuthology.orchestra.run.vm02.stdout:+185.13.148.71   79.133.44.146    2 u   22   64  377   31.959   -0.623   4.273
2026-03-06T23:45:17.283 INFO:teuthology.orchestra.run.vm02.stdout:*s7.vonderste.in 137.226.119.25   2 u   27   64  377   28.333   -1.020   2.211
2026-03-06T23:45:17.283 INFO:teuthology.orchestra.run.vm02.stdout:#ntp.kernfusion. 237.17.204.95    2 u   20   64  377   37.673   +2.370   4.268
2026-03-06T23:45:17.283 INFO:teuthology.orchestra.run.vm02.stdout:+x1.ncomputers.o 82.64.42.185     2 u   27   64  377   31.514   -0.479   4.269
2026-03-06T23:45:17.283 INFO:teuthology.orchestra.run.vm02.stdout:#ip217-154-182-6 37.15.221.189    2 u   24   64  377   66.965   -6.858   4.267
2026-03-06T23:45:17.283 INFO:teuthology.orchestra.run.vm02.stdout:#alphyn.canonica 132.163.96.1     2 u   38   64  377  101.415   -4.629   4.040
2026-03-06T23:45:17.283 INFO:teuthology.orchestra.run.vm02.stdout:+mail.gunnarhofm 192.53.103.103   2 u   23   64  377   25.151   -0.706   4.266
2026-03-06T23:45:17.283 INFO:teuthology.orchestra.run.vm02.stdout:+tor.nocabal.de  131.188.3.222    2 u   22   64  377   25.384   -0.686   4.206
2026-03-06T23:45:17.283 INFO:teuthology.orchestra.run.vm02.stdout:+185.125.190.57  194.121.207.249  2 u   33   64  377   34.308   -2.308   4.052
2026-03-06T23:45:17.283 INFO:teuthology.orchestra.run.vm02.stdout:+time2.sebhostin 127.65.222.189   2 u   20   64  377   28.921   -0.133   4.185
2026-03-06T23:45:17.283 INFO:teuthology.orchestra.run.vm02.stdout:+185.125.190.56  194.121.207.249  2 u   31   64  377   31.798   +0.298   3.971
2026-03-06T23:45:17.283 INFO:teuthology.orchestra.run.vm02.stdout:+185.125.190.58  145.238.80.80    2 u   33   64  377   30.339   -0.116   4.013
2026-03-06T23:45:17.283 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-06T23:45:17.286 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-06T23:45:17.286 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-06T23:45:17.288 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-06T23:45:17.290 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-06T23:45:17.292 INFO:teuthology.task.internal:Duration was 635.674380 seconds
2026-03-06T23:45:17.292 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-06T23:45:17.295 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-03-06T23:45:17.295 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-06T23:45:17.296 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-06T23:45:17.323 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-03-06T23:45:17.323 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm02.local
2026-03-06T23:45:17.323 DEBUG:teuthology.orchestra.run.vm02:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-06T23:45:17.374 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm07.local
2026-03-06T23:45:17.374 DEBUG:teuthology.orchestra.run.vm07:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-06T23:45:17.386 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-06T23:45:17.387 DEBUG:teuthology.orchestra.run.vm02:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-06T23:45:17.417 DEBUG:teuthology.orchestra.run.vm07:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-06T23:45:17.469 INFO:teuthology.task.internal.syslog:Compressing syslogs...
2026-03-06T23:45:17.469 DEBUG:teuthology.orchestra.run.vm02:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-06T23:45:17.470 DEBUG:teuthology.orchestra.run.vm07:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-06T23:45:17.476 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-06T23:45:17.476 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-06T23:45:17.476 INFO:teuthology.orchestra.run.vm02.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-06T23:45:17.476 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-06T23:45:17.476 INFO:teuthology.orchestra.run.vm02.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-06T23:45:17.487 INFO:teuthology.orchestra.run.vm02.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 89.7% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-06T23:45:17.515 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-06T23:45:17.516 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-06T23:45:17.516 INFO:teuthology.orchestra.run.vm07.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-06T23:45:17.516 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-06T23:45:17.516 INFO:teuthology.orchestra.run.vm07.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0%/home/ubuntu/cephtest/archive/syslog/journalctl.log: -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-06T23:45:17.523 INFO:teuthology.orchestra.run.vm07.stderr: 90.1% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-06T23:45:17.525 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-06T23:45:17.527 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-06T23:45:17.528 DEBUG:teuthology.orchestra.run.vm02:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-06T23:45:17.541 DEBUG:teuthology.orchestra.run.vm07:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-06T23:45:17.577 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-06T23:45:17.580 DEBUG:teuthology.orchestra.run.vm02:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-06T23:45:17.585 DEBUG:teuthology.orchestra.run.vm07:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-06T23:45:17.590 INFO:teuthology.orchestra.run.vm02.stdout:kernel.core_pattern = core
2026-03-06T23:45:17.627 INFO:teuthology.orchestra.run.vm07.stdout:kernel.core_pattern = core
2026-03-06T23:45:17.636 DEBUG:teuthology.orchestra.run.vm02:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-06T23:45:17.643 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-06T23:45:17.643 DEBUG:teuthology.orchestra.run.vm07:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-06T23:45:17.683 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-06T23:45:17.683 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-06T23:45:17.685 INFO:teuthology.task.internal:Transferring archived files...
2026-03-06T23:45:17.686 DEBUG:teuthology.misc:Transferring archived files from vm02:/home/ubuntu/cephtest/archive to /archive/irq0-2026-03-06_20:21:59-orch:cephadm:smoke-roleless-cobaltcore-storage-v19.2.3-fasttrack-5-none-default-vps/412/remote/vm02
2026-03-06T23:45:17.686 DEBUG:teuthology.orchestra.run.vm02:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-06T23:45:17.695 DEBUG:teuthology.misc:Transferring archived files from vm07:/home/ubuntu/cephtest/archive to /archive/irq0-2026-03-06_20:21:59-orch:cephadm:smoke-roleless-cobaltcore-storage-v19.2.3-fasttrack-5-none-default-vps/412/remote/vm07
2026-03-06T23:45:17.695 DEBUG:teuthology.orchestra.run.vm07:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-06T23:45:17.732 INFO:teuthology.task.internal:Removing archive directory...
2026-03-06T23:45:17.733 DEBUG:teuthology.orchestra.run.vm02:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-06T23:45:17.737 DEBUG:teuthology.orchestra.run.vm07:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-06T23:45:17.779 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-06T23:45:17.783 INFO:teuthology.task.internal:Not uploading archives.
2026-03-06T23:45:17.783 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
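The internal.coredump unwind above restores the default kernel.core_pattern, deletes any core that file(1) attributes to systemd-sysusers (treated as benign noise), and removes the coredump directory only if it ends up empty; the subsequent test -e returning exit status 1 on both VMs is the pass condition, confirming no unexpected cores were left behind. A sketch of that teardown under the same assumptions (teuthology's /home/ubuntu/cephtest/archive layout, GNU coreutils); this is an illustrative rewrite of the logged one-liner, not teuthology's own code:

    #!/usr/bin/env bash
    # Unwind the coredump collector: reset the core pattern, prune benign
    # cores, and verify nothing else crashed during the run.
    coredir=/home/ubuntu/cephtest/archive/coredump
    sudo sysctl -w kernel.core_pattern=core             # restore the default
    # Remove only cores produced by systemd-sysusers; anything else is kept,
    # which leaves the directory non-empty so it gets archived for inspection.
    while IFS= read -r -d '' f; do
        file "$f" | grep -q systemd-sysusers && sudo rm -- "$f"
    done < <(sudo find "$coredir" -type f -print0)
    sudo rmdir --ignore-fail-on-non-empty -- "$coredir"
    # The directory being gone is the "no coredumps" success case.
    test -e "$coredir" || echo "no unexpected coredumps: OK"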
2026-03-06T23:45:17.786 INFO:teuthology.task.internal:Tidying up after the test...
2026-03-06T23:45:17.786 DEBUG:teuthology.orchestra.run.vm02:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-06T23:45:17.787 DEBUG:teuthology.orchestra.run.vm07:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-06T23:45:17.789 INFO:teuthology.orchestra.run.vm02.stdout: 258078 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 6 23:45 /home/ubuntu/cephtest
2026-03-06T23:45:17.823 INFO:teuthology.orchestra.run.vm07.stdout: 258077 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 6 23:45 /home/ubuntu/cephtest
2026-03-06T23:45:17.824 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-06T23:45:17.830 INFO:teuthology.run:Summary data:
description: orch:cephadm:smoke-roleless/{0-distro/ubuntu_22.04 1-start 2-services/jaeger 3-final}
duration: 635.6743795871735
owner: irq0
success: true

2026-03-06T23:45:17.830 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-06T23:45:17.851 INFO:teuthology.run:pass